Jan 26 15:33:43 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 15:33:43 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:33:43 crc restorecon[4696]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:43 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:33:44 crc restorecon[4696]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:33:44 crc 
restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:33:44 crc restorecon[4696]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 15:33:44 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
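
Note: the restorecon pass above has finished at this point (its final action is relabeling /var/usrlocal/bin/kubenswrapper to kubelet_exec_t), and every one of its entries has one of two fixed shapes: "Relabeled <path> from <old context> to <new context>" or "<path> not reset as customized by admin to <context>". A quick way to condense thousands of such entries is to tally outcomes per target SELinux context. The Python below is a minimal sketch, not part of the journal; it assumes one journal entry per line as journalctl emits them (the capture above wraps several entries per line), and the helper name summarize_restorecon and the pipeline in the usage comment are illustrative, not anything this system ships.

import re
import sys
from collections import Counter

# The two restorecon message shapes seen in this journal.
RELABELED = re.compile(r"restorecon\[\d+\]: Relabeled (\S+) from (\S+) to (\S+)")
NOT_RESET = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")

def summarize_restorecon(lines):
    """Count relabeled and skipped paths per target SELinux context."""
    relabeled, skipped = Counter(), Counter()
    for line in lines:
        m = RELABELED.search(line)
        if m:
            relabeled[m.group(3)] += 1
            continue
        m = NOT_RESET.search(line)
        if m:
            skipped[m.group(2)] += 1
    return relabeled, skipped

if __name__ == "__main__":
    # Hypothetical usage: journalctl -b | python summarize.py
    relabeled, skipped = summarize_restorecon(sys.stdin)
    for ctx, n in skipped.most_common():
        print(f"skipped (customized) {n:6d} -> {ctx}")
    for ctx, n in relabeled.most_common():
        print(f"relabeled            {n:6d} -> {ctx}")

Run against this boot, the skipped counts would be dominated by container_file_t contexts carrying per-pod MCS category pairs such as s0:c7,c13 — which is the point of the "not reset" message: those labels were set deliberately per pod and restorecon leaves them alone. The kubelet (kubenswrapper) startup warnings continue below.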
Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 15:33:45 crc kubenswrapper[4713]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.615677 4713 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618584 4713 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618610 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618614 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618620 4713 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618624 4713 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618628 4713 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618632 4713 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618637 4713 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618642 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618648 4713 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618653 4713 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618658 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618663 4713 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618668 4713 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618674 4713 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618679 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618684 4713 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618689 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618693 4713 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618703 4713 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618707 4713 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618711 4713 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618715 4713 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618720 4713 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618729 4713 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618733 4713 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618738 4713 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618742 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618746 4713 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618751 4713 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618755 4713 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618759 4713 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618763 4713 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618768 4713 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618774 4713 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618778 4713 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618783 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618788 4713 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618794 4713 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618799 4713 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618805 4713 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618809 4713 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618813 4713 feature_gate.go:330] unrecognized feature gate: Example Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618817 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618820 4713 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618825 4713 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618828 4713 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618832 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618835 4713 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618839 4713 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618843 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618847 4713 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618850 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618853 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618857 4713 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618860 4713 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618866 4713 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
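The W-level lines in this block come from three paths in feature_gate.go: names the kubelet never registered (cluster-wide OpenShift gates such as GatewayAPI) log as unrecognized and are skipped, while explicitly setting a GA gate (line 353) or a deprecated gate (line 351) logs that the gate will be removed. A simplified sketch of that branching, assuming a reduced lifecycle model rather than the real k8s.io/component-base/featuregate code:

package main

import "fmt"

type stage string

const (
	ga         stage = "GA"
	deprecated stage = "Deprecated"
)

func main() {
	// Tiny subset of gates the kubelet itself registers (stages assumed).
	known := map[string]stage{
		"CloudDualStackNodeIPs":                  ga,
		"DisableKubeletCloudCredentialProviders": ga,
		"KMSv1":                                  deprecated,
	}
	// Subset of what this log shows being requested.
	requested := []string{"GatewayAPI", "KMSv1", "CloudDualStackNodeIPs"}
	for _, name := range requested {
		st, ok := known[name]
		switch {
		case !ok:
			fmt.Printf("W] unrecognized feature gate: %s\n", name)
		case st == ga:
			fmt.Printf("W] Setting GA feature gate %s=true. It will be removed in a future release.\n", name)
		case st == deprecated:
			fmt.Printf("W] Setting deprecated feature gate %s=true. It will be removed in a future release.\n", name)
		}
	}
}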
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618871 4713 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618875 4713 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618879 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618882 4713 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618886 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618890 4713 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618894 4713 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618897 4713 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618902 4713 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618907 4713 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618910 4713 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618914 4713 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618918 4713 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.618922 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619020 4713 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619068 4713 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619077 4713 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619083 4713 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619090 4713 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619095 4713 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619101 4713 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619107 4713 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619111 4713 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619116 4713 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619121 4713 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619125 4713 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619130 4713 
flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619134 4713 flags.go:64] FLAG: --cgroup-root="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619139 4713 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619144 4713 flags.go:64] FLAG: --client-ca-file="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619148 4713 flags.go:64] FLAG: --cloud-config="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619152 4713 flags.go:64] FLAG: --cloud-provider="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619157 4713 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619164 4713 flags.go:64] FLAG: --cluster-domain="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619169 4713 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619174 4713 flags.go:64] FLAG: --config-dir="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619179 4713 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619184 4713 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619191 4713 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619195 4713 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619199 4713 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619204 4713 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619209 4713 flags.go:64] FLAG: --contention-profiling="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619212 4713 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619216 4713 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619221 4713 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619225 4713 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619230 4713 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619235 4713 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619239 4713 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619243 4713 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619248 4713 flags.go:64] FLAG: --enable-server="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619252 4713 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619259 4713 flags.go:64] FLAG: --event-burst="100" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619264 4713 flags.go:64] FLAG: --event-qps="50" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619268 4713 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619273 4713 flags.go:64] FLAG: 
--event-storage-event-limit="default=0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619278 4713 flags.go:64] FLAG: --eviction-hard="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619283 4713 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619288 4713 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619292 4713 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619296 4713 flags.go:64] FLAG: --eviction-soft="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619300 4713 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619304 4713 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619308 4713 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619312 4713 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619317 4713 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619321 4713 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619325 4713 flags.go:64] FLAG: --feature-gates="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619330 4713 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619335 4713 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619339 4713 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619343 4713 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619350 4713 flags.go:64] FLAG: --healthz-port="10248" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619355 4713 flags.go:64] FLAG: --help="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619376 4713 flags.go:64] FLAG: --hostname-override="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619381 4713 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619385 4713 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619390 4713 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619394 4713 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619398 4713 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619402 4713 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619406 4713 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619410 4713 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619414 4713 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619418 4713 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619423 4713 flags.go:64] 
FLAG: --kube-api-qps="50" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619427 4713 flags.go:64] FLAG: --kube-reserved="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619434 4713 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619438 4713 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619442 4713 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619447 4713 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619451 4713 flags.go:64] FLAG: --lock-file="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619455 4713 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619459 4713 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619463 4713 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619470 4713 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619474 4713 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619478 4713 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619482 4713 flags.go:64] FLAG: --logging-format="text" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619486 4713 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619491 4713 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619495 4713 flags.go:64] FLAG: --manifest-url="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619499 4713 flags.go:64] FLAG: --manifest-url-header="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619505 4713 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619509 4713 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619515 4713 flags.go:64] FLAG: --max-pods="110" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619519 4713 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619524 4713 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619528 4713 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619532 4713 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619536 4713 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619540 4713 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619544 4713 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619555 4713 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619559 4713 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 15:33:45 crc 
kubenswrapper[4713]: I0126 15:33:45.619563 4713 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619567 4713 flags.go:64] FLAG: --pod-cidr="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619571 4713 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619579 4713 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619583 4713 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619588 4713 flags.go:64] FLAG: --pods-per-core="0" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619592 4713 flags.go:64] FLAG: --port="10250" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619598 4713 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619603 4713 flags.go:64] FLAG: --provider-id="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619607 4713 flags.go:64] FLAG: --qos-reserved="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619612 4713 flags.go:64] FLAG: --read-only-port="10255" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619617 4713 flags.go:64] FLAG: --register-node="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619623 4713 flags.go:64] FLAG: --register-schedulable="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619627 4713 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619636 4713 flags.go:64] FLAG: --registry-burst="10" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619641 4713 flags.go:64] FLAG: --registry-qps="5" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619645 4713 flags.go:64] FLAG: --reserved-cpus="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619650 4713 flags.go:64] FLAG: --reserved-memory="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619656 4713 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619661 4713 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619665 4713 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619670 4713 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619674 4713 flags.go:64] FLAG: --runonce="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619678 4713 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619682 4713 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619687 4713 flags.go:64] FLAG: --seccomp-default="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619692 4713 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619696 4713 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619700 4713 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619705 4713 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 
15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619709 4713 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619713 4713 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619717 4713 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619721 4713 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619725 4713 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619729 4713 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619734 4713 flags.go:64] FLAG: --system-cgroups="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619738 4713 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619745 4713 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619749 4713 flags.go:64] FLAG: --tls-cert-file="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619753 4713 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619759 4713 flags.go:64] FLAG: --tls-min-version="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619763 4713 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619768 4713 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619772 4713 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619776 4713 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619780 4713 flags.go:64] FLAG: --v="2" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619786 4713 flags.go:64] FLAG: --version="false" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619793 4713 flags.go:64] FLAG: --vmodule="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619798 4713 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.619802 4713 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619908 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619913 4713 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619918 4713 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619922 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619926 4713 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619930 4713 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619934 4713 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619937 4713 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619941 4713 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619945 4713 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619948 4713 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619952 4713 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619955 4713 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619959 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619962 4713 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619966 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619969 4713 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619973 4713 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619977 4713 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619980 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619984 4713 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619987 4713 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619991 4713 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619994 4713 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.619998 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620001 4713 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620005 4713 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620008 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620012 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620016 4713 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620019 4713 feature_gate.go:330] unrecognized feature gate: Example Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620023 4713 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620028 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620035 4713 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620039 4713 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620046 4713 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620052 4713 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620057 4713 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620061 4713 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620065 4713 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620070 4713 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620075 4713 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620078 4713 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620082 4713 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620086 4713 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620090 4713 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620093 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620097 4713 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620101 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620104 4713 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620109 4713 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620112 4713 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620116 4713 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620119 4713 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620123 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620126 4713 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620130 4713 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620134 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620137 4713 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620148 4713 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620152 4713 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620155 4713 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620159 4713 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620163 4713 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620168 4713 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620172 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620175 4713 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620179 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620183 4713 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620186 4713 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.620190 4713 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.620203 4713 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.632970 4713 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.633011 4713 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633078 4713 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633086 4713 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633091 4713 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633098 4713 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633104 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633108 4713 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633113 4713 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 15:33:45 crc 
kubenswrapper[4713]: W0126 15:33:45.633117 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633122 4713 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633126 4713 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633130 4713 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633134 4713 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633138 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633142 4713 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633147 4713 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633150 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633154 4713 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633158 4713 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633162 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633165 4713 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633169 4713 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633173 4713 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633176 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633180 4713 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633184 4713 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633187 4713 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633192 4713 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633198 4713 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633202 4713 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633206 4713 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633210 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633217 4713 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633222 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633227 4713 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633233 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633237 4713 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633241 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633244 4713 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633248 4713 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633251 4713 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633255 4713 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633259 4713 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633262 4713 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633265 4713 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633270 4713 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
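The long I-level run from flags.go:64 further above is the kubelet echoing every command-line flag it parsed, one FLAG: --name="value" entry each. When triaging a capture like this, those pairs can be recovered mechanically; a throwaway Go sketch, assuming the exact quoting shown in the dump:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Matches the `FLAG: --name="value"` fragments logged by flags.go:64.
	re := regexp.MustCompile(`FLAG: (--[a-z0-9-]+)="([^"]*)"`)
	capture := `I0126 15:33:45.619544 4713 flags.go:64] FLAG: --node-ip="192.168.126.11" ` +
		`I0126 15:33:45.619174 4713 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"`
	flags := map[string]string{}
	for _, m := range re.FindAllStringSubmatch(capture, -1) {
		flags[m[1]] = m[2]
	}
	fmt.Println(flags["--node-ip"], flags["--config"])
}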
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633275 4713 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633279 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633283 4713 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633286 4713 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633290 4713 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633293 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633297 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633303 4713 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633307 4713 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633312 4713 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633316 4713 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633321 4713 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633325 4713 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633329 4713 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633333 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633337 4713 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633341 4713 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633345 4713 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633351 4713 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633356 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633374 4713 feature_gate.go:330] unrecognized feature gate: Example Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633379 4713 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633384 4713 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
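The unrecognized-gate warnings recur with fresh timestamps because the gate set is evidently re-applied several times during startup (for the command line and again as configuration is applied), not because the capture is duplicated. A small filter to collapse the repetition into per-gate counts; it assumes the literal wording above and tolerates entries split across wrapped lines:

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	const marker = "unrecognized feature gate: "
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. journalctl -u kubelet | thisprog
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // wrapped lines can be long
	for sc.Scan() {
		// A wrapped line may carry several entries; split on the marker and
		// take the gate name that immediately follows each occurrence.
		parts := strings.Split(sc.Text(), marker)
		for _, p := range parts[1:] {
			if f := strings.Fields(p); len(f) > 0 {
				counts[f[0]]++
			}
		}
	}
	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%4dx %s\n", counts[n], n)
	}
}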
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633388 4713 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633392 4713 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633397 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.633404 4713 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633514 4713 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633521 4713 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633525 4713 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633529 4713 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633533 4713 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633537 4713 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633540 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633544 4713 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633547 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633551 4713 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633554 4713 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633558 4713 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633562 4713 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633566 4713 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633569 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633573 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633577 4713 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633580 4713 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633584 4713 feature_gate.go:330] unrecognized feature gate: 
MetricsCollectionProfiles Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633587 4713 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633591 4713 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633594 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633598 4713 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633601 4713 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633608 4713 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633612 4713 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633617 4713 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633623 4713 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633626 4713 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633631 4713 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633635 4713 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633638 4713 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633642 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633646 4713 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633652 4713 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633656 4713 feature_gate.go:330] unrecognized feature gate: Example Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633660 4713 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633665 4713 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
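Each repetition of the warning block ends with the same I-level summary from feature_gate.go:386, feature gates: {map[...]}: the effective result after the warnings, with the explicitly set gates on and the unknown names dropped. That summary is also easy to lift back into a structure; a Go sketch, assuming the entry has first been isolated from the surrounding wrap:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseGates expects the shape logged above: "feature gates: {map[Name:bool ...]}".
func parseGates(line string) map[string]bool {
	start := strings.Index(line, "map[")
	if start < 0 {
		return nil
	}
	rest := line[start+len("map["):]
	end := strings.Index(rest, "]")
	if end < 0 {
		return nil
	}
	gates := map[string]bool{}
	for _, pair := range strings.Fields(rest[:end]) {
		kv := strings.SplitN(pair, ":", 2)
		if len(kv) != 2 {
			continue
		}
		if v, err := strconv.ParseBool(kv[1]); err == nil {
			gates[kv[0]] = v
		}
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
	fmt.Println(parseGates(line))
}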
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633670 4713 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633674 4713 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633679 4713 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633714 4713 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633719 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633723 4713 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633727 4713 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633731 4713 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633735 4713 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633738 4713 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633742 4713 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633746 4713 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633749 4713 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633753 4713 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633757 4713 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633760 4713 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633763 4713 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633767 4713 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633772 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633776 4713 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633780 4713 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633783 4713 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633786 4713 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633790 4713 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633794 4713 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633798 4713 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633802 4713 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633805 4713 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633809 4713 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633813 4713 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633817 4713 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633821 4713 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.633825 4713 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.633830 4713 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.634896 4713 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.640158 4713 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.640272 4713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
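The bootstrap lines above and the rotation entries that follow show the client-certificate manager at work: the current kubeconfig is still valid, the cert expires 2026-02-24, and the jittered rotation deadline (2026-01-05) is already in the past at boot (Jan 26), so the kubelet rotates immediately. The first CSR POST then fails with connection refused, presumably because api-int.crc.testing:6443 is not serving yet this early in startup, and the manager retries in the background. A sketch of the deadline math, assuming the roughly 70-90%-of-lifetime jitter window used by k8s.io/client-go's certificate manager:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point late in the cert's lifetime so a
// fleet of nodes does not rotate all at once (window fraction is an assumption).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	// notAfter and "now" are taken from the log; notBefore is assumed, since
	// the issuance time is not logged here.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	now, _ := time.Parse(time.RFC3339, "2026-01-26T15:33:45Z")
	d := rotationDeadline(notBefore, notAfter)
	fmt.Printf("deadline %s; rotate now? %v\n", d.Format(time.RFC3339), now.After(d))
}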
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.644086 4713 server.go:997] "Starting client certificate rotation"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.644129 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.644870 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 08:22:04.085724004 +0000 UTC
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.644983 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.658896 4713 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.661281 4713 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.662391 4713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.671992 4713 log.go:25] "Validated CRI v1 runtime API"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.704853 4713 log.go:25] "Validated CRI v1 image API"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.706777 4713 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.709799 4713 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-15-28-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.709858 4713 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.732312 4713 manager.go:217] Machine: {Timestamp:2026-01-26 15:33:45.730976728 +0000 UTC m=+0.867993983 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6411f4a9-0074-492c-9c99-d43928c7d95b BootID:bc24a9ed-92e4-4376-95db-334eab04cd6c Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:26:f7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:26:f7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8c:30:eb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d6:6f:42 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d0:f3:71 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:49:34:2c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:d8:06:0e:ba:d1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:9a:5d:90:2c:1b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.732658 4713 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.732961 4713 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735166 4713 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735419 4713 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735457 4713 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735704 4713 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735717 4713 container_manager_linux.go:303] "Creating device plugin manager"
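The nodeConfig entry above packs the kubelet's resource-management settings into one JSON blob; the HardEvictionThresholds array inside it is the part worth pulling out (evict when nodefs.available < 10%, imagefs.available < 15%, memory.available < 100Mi, and so on). A small sketch that decodes a copied-and-trimmed fragment of that array into a readable list; the struct below is an ad-hoc assumption for illustration, not a kubelet type:

```go
// Sketch: decode a fragment of the HardEvictionThresholds JSON from the
// container_manager_linux.go:272 log entry into a readable summary.
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // absolute quantity, e.g. "100Mi"
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	// Fragment copied from the log (GracePeriod/MinReclaim fields trimmed).
	raw := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	         {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```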
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735876 4713 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.735967 4713 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.736340 4713 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.736494 4713 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.737382 4713 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.737412 4713 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.737438 4713 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.737460 4713 kubelet.go:324] "Adding apiserver pod source"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.737481 4713 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.739951 4713 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.740389 4713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.741013 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.741139 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.741044 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.741237 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.742981 4713 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744009 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744032 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744039 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744047 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744058 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744066 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744073 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744084 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744093 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744101 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744132 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744140 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744164 4713 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744674 4713 server.go:1280] "Started kubelet"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744751 4713 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.744914 4713 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.745500 4713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.746154 4713 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e51c8a14c8372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:33:45.744638834 +0000 UTC m=+0.881656069,LastTimestamp:2026-01-26 15:33:45.744638834 +0000 UTC m=+0.881656069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 15:33:45 crc systemd[1]: Started Kubernetes Kubelet.
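Every "connection refused" error above points at the same address, https://api-int.crc.testing:6443 (38.102.83.194): the kubelet starts before the static-pod kube-apiserver it is about to launch, so the CSR submission, the Node/Service reflectors, and the event write all fail until the apiserver comes up, while systemd still reports the unit as started. A diagnostic sketch (an assumption, not part of the kubelet) that probes the endpoint with backoff to separate this startup race from a real network or DNS problem:

```go
// Sketch: retry a TCP dial against the api-int endpoint from the log. If it
// eventually connects, the errors above were a boot-ordering race; if it
// never does, look at kube-apiserver, DNS, or the network instead.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443" // host:port taken from the log
	backoff := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting connections\n", attempt, endpoint)
			return
		}
		fmt.Printf("attempt %d: %v (retrying in %s)\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff, bounded by the attempt limit
	}
	fmt.Println("endpoint never became reachable; investigate kube-apiserver")
}
```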
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.747171 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.747632 4713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.747718 4713 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.747763 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:08:12.531061414 +0000 UTC
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.747962 4713 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.748021 4713 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.748032 4713 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.748105 4713 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.748254 4713 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.748758 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.748860 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.749420 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.749553 4713 factory.go:55] Registering systemd factory
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.749579 4713 factory.go:221] Registration of the systemd container factory successfully
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.750388 4713 factory.go:153] Registering CRI-O factory
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.750412 4713 factory.go:221] Registration of the crio container factory successfully
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.750486 4713 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.750513 4713 factory.go:103] Registering Raw factory
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.750538 4713 manager.go:1196] Started watching for new ooms in manager
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.751243 4713 manager.go:319] Starting recovery of all containers
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768523 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768602 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768626 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768645 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768662 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768677 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768689 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768700 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768715 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768730 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768745 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768787 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768821 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768842 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768855 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768871 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768885 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768902 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.768920 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.770928 4713 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.770959 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.770974 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771011 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771045 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771057 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771068 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771081 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771114 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771133 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771146 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771158 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771172 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771266 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771276 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771289 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771302 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771319 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771330 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771342 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771370 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771382 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771395 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771407 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771419 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771433 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771446 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771458 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771473 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771488 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771501 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771514 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771527 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771540 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771557 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771569 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771581 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771594 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771608 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771620 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771635 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771649 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771667 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771678 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771692 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771705 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771719 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771730 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771742 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771755 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771767 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771780 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771791 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771801 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771813 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771825 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771837 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771852 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771867 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771881 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771897 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771910 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771922 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771932 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771944 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771956 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771968 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771981 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.771993 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772008 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772025 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772039 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772050 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772061 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772074 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772084 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772097 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772109 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772119 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772131 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772143 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772156 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772168 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772180 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772193 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772206 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772223 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772237 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772260 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772276 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772291 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772306 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772320 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772334 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772346 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772372 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772384 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772397 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772410 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772421 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772434 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772446 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772457 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772469 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772481 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772493 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772506 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772522 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772539 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772550 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772561 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772572 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772583 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772593 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772605 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772618 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772632 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772643 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772654 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772666 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772680 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772694 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772709 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772725 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f"
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772740 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772752 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772763 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772773 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772783 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772796 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772808 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772819 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772830 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772841 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772854 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772870 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772886 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772901 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772915 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772926 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772936 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772946 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772957 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772968 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772979 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.772991 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773002 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773013 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773026 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773039 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773055 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773071 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773085 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773098 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773114 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773131 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773149 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773166 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773180 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773194 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773205 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773216 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773227 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773239 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.773253 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774621 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774653 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774692 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774710 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774721 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774787 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774800 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774812 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774844 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774861 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774877 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774890 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774921 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774933 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774948 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774961 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.774972 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775006 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775019 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775037 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775051 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775089 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775119 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775130 4713 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775140 4713 reconstruct.go:97] "Volume reconstruction finished" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.775172 4713 reconciler.go:26] "Reconciler: start to sync 
state" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.779745 4713 manager.go:324] Recovery completed Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.792358 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.794584 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.794726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.794887 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.796120 4713 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.796137 4713 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.796168 4713 state_mem.go:36] "Initialized new in-memory state store" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.799620 4713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.802136 4713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.802208 4713 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.802248 4713 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.802305 4713 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 15:33:45 crc kubenswrapper[4713]: W0126 15:33:45.804841 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.804933 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.810644 4713 policy_none.go:49] "None policy: Start" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.811654 4713 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.811689 4713 state_mem.go:35] "Initializing new in-memory state store" Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.848073 4713 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.865857 4713 manager.go:334] "Starting Device Plugin manager" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.865968 4713 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 15:33:45 crc kubenswrapper[4713]: 
I0126 15:33:45.865988 4713 server.go:79] "Starting device plugin registration server" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.866581 4713 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.866609 4713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.867055 4713 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.867203 4713 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.867224 4713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.876656 4713 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.903271 4713 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.903435 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.909050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.909117 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.909127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.909630 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.909679 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.910429 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.911397 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.911428 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.911437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912230 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912278 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912291 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912565 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912792 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.912870 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.913963 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.913993 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914007 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914006 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914107 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914125 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914273 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914463 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.914513 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915154 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915239 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915256 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915336 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915385 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915451 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915704 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915495 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.915925 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917120 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917137 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917231 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917529 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.917577 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.919020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.919079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.919099 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.950618 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.967749 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.969024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.969091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.969104 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.969136 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:45 crc kubenswrapper[4713]: E0126 15:33:45.969903 4713 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.977839 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.977922 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.977973 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.977996 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978029 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978051 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978205 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978304 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978386 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978429 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978462 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978490 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978524 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978555 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:45 crc kubenswrapper[4713]: I0126 15:33:45.978598 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080033 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080101 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080128 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080152 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080169 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080188 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080204 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080222 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080240 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080257 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080277 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080295 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080315 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080333 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080350 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080348 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080485 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080545 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080587 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080626 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080662 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080653 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080610 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080704 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080667 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080683 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080748 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080753 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080809 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.080825 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.171168 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.172848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.172912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.172929 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.172974 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.173715 4713 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.233825 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.239800 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.259070 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.278419 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.281614 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-aa7f933e342f3dd185e7bba9956f6b77f64ce45c1664a74c856fa49a60d49df1 WatchSource:0}: Error finding container aa7f933e342f3dd185e7bba9956f6b77f64ce45c1664a74c856fa49a60d49df1: Status 404 returned error can't find the container with id aa7f933e342f3dd185e7bba9956f6b77f64ce45c1664a74c856fa49a60d49df1 Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.282620 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-117b157ae61778b6f2e6257ae010aefa74271ea5b2137bfdfcb835dee9db6739 WatchSource:0}: Error finding container 117b157ae61778b6f2e6257ae010aefa74271ea5b2137bfdfcb835dee9db6739: Status 404 returned error can't find the container with id 117b157ae61778b6f2e6257ae010aefa74271ea5b2137bfdfcb835dee9db6739 Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.286772 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.288018 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4e0b87b13aec973368653b67bd93b97d7b0cfa42254d65f0e5094acf19d8ecc9 WatchSource:0}: Error finding container 4e0b87b13aec973368653b67bd93b97d7b0cfa42254d65f0e5094acf19d8ecc9: Status 404 returned error can't find the container with id 4e0b87b13aec973368653b67bd93b97d7b0cfa42254d65f0e5094acf19d8ecc9 Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.296396 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e720859a6447fd702cf9dd3bcf1e6778a2084fcf0f94fdeab2d6ec7bb37e7c57 WatchSource:0}: Error finding container e720859a6447fd702cf9dd3bcf1e6778a2084fcf0f94fdeab2d6ec7bb37e7c57: Status 404 returned error can't find the container with id e720859a6447fd702cf9dd3bcf1e6778a2084fcf0f94fdeab2d6ec7bb37e7c57 Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.305332 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-85fc86911712f64db3b358b3c220e4f5b89cf27290c6be949521d0bf01df8817 WatchSource:0}: Error finding container 85fc86911712f64db3b358b3c220e4f5b89cf27290c6be949521d0bf01df8817: Status 404 returned error can't find the container with id 85fc86911712f64db3b358b3c220e4f5b89cf27290c6be949521d0bf01df8817 Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.351942 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.574708 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.576449 4713 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.576540 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.576565 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.576618 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.577342 4713 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.630218 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.630319 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.746085 4713 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.748209 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:27:16.6782362 +0000 UTC Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.748724 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.748859 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:46 crc kubenswrapper[4713]: W0126 15:33:46.789951 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:46 crc kubenswrapper[4713]: E0126 15:33:46.790043 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" 
logger="UnhandledError" Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.809090 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e0b87b13aec973368653b67bd93b97d7b0cfa42254d65f0e5094acf19d8ecc9"} Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.810220 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"117b157ae61778b6f2e6257ae010aefa74271ea5b2137bfdfcb835dee9db6739"} Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.811418 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aa7f933e342f3dd185e7bba9956f6b77f64ce45c1664a74c856fa49a60d49df1"} Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.812776 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85fc86911712f64db3b358b3c220e4f5b89cf27290c6be949521d0bf01df8817"} Jan 26 15:33:46 crc kubenswrapper[4713]: I0126 15:33:46.814116 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e720859a6447fd702cf9dd3bcf1e6778a2084fcf0f94fdeab2d6ec7bb37e7c57"} Jan 26 15:33:47 crc kubenswrapper[4713]: W0126 15:33:47.143989 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:47 crc kubenswrapper[4713]: E0126 15:33:47.144070 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:47 crc kubenswrapper[4713]: E0126 15:33:47.153509 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.377924 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.380022 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.380076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.380133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.380167 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:47 crc kubenswrapper[4713]: E0126 15:33:47.380752 
4713 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.745864 4713 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.748875 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:43:46.261726814 +0000 UTC Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.793504 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 15:33:47 crc kubenswrapper[4713]: E0126 15:33:47.795316 4713 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.820073 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3" exitCode=0 Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.820176 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.820235 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.822103 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.822140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.822150 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.823413 4713 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07" exitCode=0 Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.823479 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.823666 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.824805 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826019 
4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826066 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826394 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826428 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.826442 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.830960 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.831008 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.831016 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.831028 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.831049 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832043 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832136 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832700 4713 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a" exitCode=0 Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832754 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.832738 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.833483 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.833526 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.833542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.835191 4713 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef" exitCode=0 Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.835228 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef"} Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.835291 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.836216 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.836254 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:47 crc kubenswrapper[4713]: I0126 15:33:47.836268 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.491653 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.746669 4713 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.749636 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:16:02.783758479 +0000 UTC Jan 26 15:33:48 crc kubenswrapper[4713]: E0126 15:33:48.754561 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.840193 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15"} Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.842070 4713 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc" exitCode=0 Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.842109 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc"} Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.842217 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.843056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.843090 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.843100 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:48 crc kubenswrapper[4713]: W0126 15:33:48.845133 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:48 crc kubenswrapper[4713]: E0126 15:33:48.845246 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.848053 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108"} Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.848129 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.849292 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.849330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.849340 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.851028 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47"} Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.851080 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.852100 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.852164 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.852185 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:48 crc kubenswrapper[4713]: W0126 15:33:48.888629 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:48 crc kubenswrapper[4713]: E0126 15:33:48.888721 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.981244 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.983536 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.983587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.983604 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:48 crc kubenswrapper[4713]: I0126 15:33:48.983643 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:48 crc kubenswrapper[4713]: E0126 15:33:48.984561 4713 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 26 15:33:49 crc kubenswrapper[4713]: W0126 15:33:49.078415 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:49 crc kubenswrapper[4713]: E0126 15:33:49.078544 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:49 crc kubenswrapper[4713]: W0126 15:33:49.290000 4713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:49 crc kubenswrapper[4713]: E0126 15:33:49.290109 4713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.746915 4713 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.749909 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:50:54.091476535 +0000 UTC Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.857566 4713 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4" exitCode=0 Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.857656 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4"} Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.857717 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.859054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.859102 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.859121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.862790 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce"} Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.862846 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709"} Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.862875 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.865186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.865255 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.865272 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.872941 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888"} Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.873010 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24"} Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.873034 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.873095 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876846 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876925 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:49 crc kubenswrapper[4713]: I0126 15:33:49.876941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.750289 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:57:40.506134376 +0000 UTC Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878041 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878086 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878203 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878855 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878875 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.878883 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.889388 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:33:50 crc 
kubenswrapper[4713]: I0126 15:33:50.889445 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.889349 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.889497 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.889515 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.889526 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858"} Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.890422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.890453 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:50 crc kubenswrapper[4713]: I0126 15:33:50.890465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.492671 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.492810 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.750759 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:02:33.181483882 +0000 UTC Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.896628 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.896832 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.898811 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.898867 4713 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.898890 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.902635 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140"} Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.902758 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.902806 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.902871 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.904500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.904585 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.904612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.905134 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.905191 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:51 crc kubenswrapper[4713]: I0126 15:33:51.905216 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.185614 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.187278 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.187329 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.187347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.187423 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.188398 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.751061 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:43:51.148934702 +0000 UTC Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.906114 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.907657 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.907744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:52 crc kubenswrapper[4713]: I0126 15:33:52.907774 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:53 crc kubenswrapper[4713]: I0126 15:33:53.751213 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:05:01.988839036 +0000 UTC Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.510349 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.510609 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.512431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.512521 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.512547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.553908 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.554188 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.554258 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.556665 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.556733 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.556752 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.565504 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.751642 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:21:03.350635619 +0000 UTC Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.913164 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.913271 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.914942 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.915043 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.915071 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.931033 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.931359 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.933472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.933537 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:54 crc kubenswrapper[4713]: I0126 15:33:54.933549 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:55 crc kubenswrapper[4713]: I0126 15:33:55.752567 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:59:43.752523786 +0000 UTC Jan 26 15:33:55 crc kubenswrapper[4713]: E0126 15:33:55.876801 4713 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.032349 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.032666 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.034291 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.034388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.034404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.094916 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.095241 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.097253 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.097300 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.097314 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.753244 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:57:12.619873391 +0000 UTC Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.819964 4713 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.820338 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.822913 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.823081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.823113 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.826663 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.919102 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.920267 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.920340 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.920356 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:56 crc kubenswrapper[4713]: I0126 15:33:56.923226 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:33:57 crc kubenswrapper[4713]: I0126 15:33:57.753484 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:00:40.713665088 +0000 UTC Jan 26 15:33:57 crc kubenswrapper[4713]: I0126 15:33:57.922241 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:57 crc kubenswrapper[4713]: I0126 15:33:57.923267 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:57 crc kubenswrapper[4713]: I0126 15:33:57.923309 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:57 crc kubenswrapper[4713]: I0126 15:33:57.923322 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:33:58 crc kubenswrapper[4713]: I0126 15:33:58.753916 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:49:13.424071336 +0000 UTC Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 15:33:59.754667 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:50:25.137367178 +0000 UTC Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 15:33:59.783965 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 
15:33:59.784188 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 15:33:59.785488 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 15:33:59.785517 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:33:59 crc kubenswrapper[4713]: I0126 15:33:59.785525 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:00 crc kubenswrapper[4713]: I0126 15:34:00.711132 4713 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 15:34:00 crc kubenswrapper[4713]: I0126 15:34:00.711195 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 15:34:00 crc kubenswrapper[4713]: I0126 15:34:00.719494 4713 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 15:34:00 crc kubenswrapper[4713]: I0126 15:34:00.719602 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 15:34:00 crc kubenswrapper[4713]: I0126 15:34:00.755444 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:58:22.708787181 +0000 UTC Jan 26 15:34:01 crc kubenswrapper[4713]: I0126 15:34:01.079174 4713 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:01 crc kubenswrapper[4713]: I0126 15:34:01.079296 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:01 crc kubenswrapper[4713]: I0126 15:34:01.492482 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Jan 26 15:34:01 crc kubenswrapper[4713]: I0126 15:34:01.492608 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 15:34:01 crc kubenswrapper[4713]: I0126 15:34:01.755602 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:34:31.154939141 +0000 UTC Jan 26 15:34:02 crc kubenswrapper[4713]: I0126 15:34:02.756060 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:30:43.513378996 +0000 UTC Jan 26 15:34:03 crc kubenswrapper[4713]: I0126 15:34:03.757287 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:22:45.768021195 +0000 UTC Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.559982 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.560231 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.560937 4713 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.561034 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.561935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.562120 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.562200 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.566058 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.758292 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:09:14.219180559 +0000 UTC Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.942854 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.943513 4713 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.943587 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.943788 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.943840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4713]: I0126 15:34:04.943853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.718995 4713 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.720026 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.720411 4713 trace.go:236] Trace[184532721]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:33:54.942) (total time: 10778ms): Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[184532721]: ---"Objects listed" error: 10778ms (15:34:05.720) Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[184532721]: [10.778294879s] [10.778294879s] END Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.720479 4713 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.723523 4713 trace.go:236] Trace[2052293051]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:33:54.542) (total time: 11181ms): Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[2052293051]: ---"Objects listed" error: 11181ms (15:34:05.723) Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[2052293051]: [11.181121215s] [11.181121215s] END Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.723568 4713 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.726186 4713 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.726845 4713 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.727350 4713 trace.go:236] Trace[168159749]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:33:55.262) (total time: 10464ms): Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[168159749]: ---"Objects listed" error: 
10464ms (15:34:05.727) Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[168159749]: [10.464523909s] [10.464523909s] END Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.727439 4713 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.729677 4713 trace.go:236] Trace[882595113]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:33:53.173) (total time: 12556ms): Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[882595113]: ---"Objects listed" error: 12555ms (15:34:05.729) Jan 26 15:34:05 crc kubenswrapper[4713]: Trace[882595113]: [12.556084109s] [12.556084109s] END Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.729715 4713 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.750634 4713 apiserver.go:52] "Watching apiserver" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.753946 4713 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.754259 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.754764 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.754872 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.754947 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.754951 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.755012 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.755061 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.755085 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.755175 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.755460 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.758163 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.758339 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.759201 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:04:35.105261501 +0000 UTC Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.759777 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.759976 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.760146 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.760349 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.760478 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.760614 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.763054 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.796164 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.830608 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.849195 4713 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.863952 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.877247 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.896808 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.909799 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928583 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928652 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928688 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928717 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928743 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928810 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928837 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928867 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928893 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928921 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928922 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.928951 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929177 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929206 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929237 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929262 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929288 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929312 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929342 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929325 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929397 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929425 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929450 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929471 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929503 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929593 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929625 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929654 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929678 4713 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929737 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929799 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929828 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929856 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929887 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929914 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929944 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930006 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930036 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 
15:34:05.930062 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930147 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930178 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930204 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930228 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930253 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930276 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930321 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930345 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 
15:34:05.930385 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930411 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929464 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930443 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929504 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929510 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929802 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929843 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930467 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930499 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930533 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929888 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930565 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930597 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.929925 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930006 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930041 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930157 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930186 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930227 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930301 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930379 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930403 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930421 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930402 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930562 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930844 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930861 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930867 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930887 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930912 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930914 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930937 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930942 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930962 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.930983 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931001 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931020 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931070 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931090 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931109 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931127 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931112 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931149 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931157 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931138 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931178 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931198 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931178 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931317 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931349 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931392 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931422 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931450 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931476 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931501 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931527 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931552 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931575 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931599 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931619 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931638 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931660 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931682 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931706 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931727 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931748 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931769 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931788 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931806 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931825 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931842 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931861 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931881 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931902 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931921 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931938 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931957 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931976 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.931993 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932015 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932033 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932051 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932067 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932086 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932116 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.932131 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" 
(UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.934866 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.934911 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.934935 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.934988 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935016 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935065 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935099 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935150 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935385 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935428 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935480 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935511 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935564 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935593 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935645 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935673 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935742 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935772 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935824 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935852 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935898 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.935923 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936005 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936030 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936056 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936080 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936126 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936146 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936167 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936184 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936200 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936228 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936246 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936264 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936280 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936387 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936410 4713 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936431 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936451 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936469 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936492 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936512 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936532 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936548 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936581 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936598 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 
26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936614 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936632 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936648 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936906 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936927 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936946 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.936969 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.937265 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.937298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.938275 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.938397 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.938811 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.939338 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.939518 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.939664 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.939925 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940017 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940055 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940107 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940130 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940176 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940196 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940232 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940275 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940345 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940397 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940428 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940468 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940488 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940507 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940542 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940563 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940581 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940600 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940644 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940664 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940703 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940732 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940802 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940830 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940852 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940893 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940915 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940956 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940978 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.940999 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941018 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941056 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941077 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941127 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941154 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941195 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941234 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941253 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941289 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941315 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941339 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941398 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941431 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941472 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941503 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941527 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941547 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941627 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941642 4713 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941654 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941664 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941677 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941690 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941700 4713 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941713 4713 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941723 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941735 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941767 4713 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941780 4713 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941791 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941802 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941813 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941823 4713 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941833 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941844 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941856 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941866 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941877 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941886 4713 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941896 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941908 4713 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941919 4713 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941929 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941948 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941960 4713 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941970 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941979 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941989 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.941998 4713 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.942009 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.942019 4713 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.942029 4713 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.942040 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.942117 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.942201 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:06.442176805 +0000 UTC m=+21.579194040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.942275 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.942300 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:06.442293928 +0000 UTC m=+21.579311163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.944731 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.944953 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.946221 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.946843 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.947094 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.947839 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948217 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948392 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948297 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948524 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948678 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.948832 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.949170 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.949285 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.949341 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.949656 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950005 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950026 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950091 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950388 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950550 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950856 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.950918 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.951205 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.951218 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.951325 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.951531 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.951839 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.952376 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.952510 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.952575 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.952867 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.952957 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.953379 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.953639 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.953661 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.954024 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.954610 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955012 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955126 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955244 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955394 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955564 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.955837 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956139 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956179 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956220 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956302 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956547 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956718 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956809 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956893 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.956906 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957222 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957232 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957283 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957323 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957401 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957404 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957641 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957930 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.957971 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958199 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958307 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958345 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958446 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958477 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958664 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958846 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958888 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958918 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.958968 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959006 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959281 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959315 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959335 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959568 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959600 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959685 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.959944 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960053 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960199 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960498 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960871 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.960879 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.961387 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.961490 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.961862 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.962151 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.962427 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.962524 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.962867 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.962870 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.963274 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.963412 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.963718 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.963727 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.963849 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.964169 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.964638 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.964893 4713 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.964961 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.966124 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.969982 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.970018 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.970032 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.970106 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:06.470081551 +0000 UTC m=+21.607098786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.973104 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.973414 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.973824 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974147 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974462 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974503 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: E0126 15:34:05.974648 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:06.474626401 +0000 UTC m=+21.611643636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974794 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974820 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.975092 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.975316 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.983298 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.983521 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.974053 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.983942 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.985966 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.986146 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.986508 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.987419 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.987449 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.987525 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.987711 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.989725 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.990017 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.992104 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.993341 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.994284 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.994383 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.994424 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.994446 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.994928 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.995644 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.997427 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.997498 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.998060 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.998688 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:05 crc kubenswrapper[4713]: I0126 15:34:05.999154 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.000925 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.001634 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.001991 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.002722 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.004358 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.006589 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.006638 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.007420 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.007573 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.008624 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.008874 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.008894 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.009147 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.009179 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.009199 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.009285 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:06.509256479 +0000 UTC m=+21.646273714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.010892 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.011088 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.011591 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.016119 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.016558 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.024316 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.024076 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.024475 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.025113 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.026471 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.033836 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.043884 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044329 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044379 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044422 4713 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044432 4713 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 
15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044441 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044450 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044460 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044469 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044477 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044485 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044493 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044501 4713 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044509 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044519 4713 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044527 4713 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044536 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044544 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 
15:34:06.044552 4713 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044559 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044568 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044577 4713 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044585 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044594 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044603 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044612 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044620 4713 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044628 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044638 4713 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044646 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044654 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") 
on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044662 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044674 4713 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044682 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044690 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044698 4713 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044707 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044700 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044747 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044715 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044787 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044827 4713 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044839 4713 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044849 4713 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044860 4713 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044870 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044879 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044890 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044900 4713 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044910 4713 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044923 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044932 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044942 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044952 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044963 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044973 4713 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044983 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.044993 4713 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045005 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045016 4713 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045026 4713 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045036 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045046 4713 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045064 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045073 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045083 4713 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045112 4713 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045124 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045134 4713 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045142 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045151 4713 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045159 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045168 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045176 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045186 4713 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045195 4713 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045204 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045212 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045221 4713 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045232 4713 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045241 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045250 4713 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045259 4713 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045267 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045276 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045286 4713 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045296 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045307 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045318 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045327 4713 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045336 4713 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045345 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045353 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045378 4713 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045387 4713 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045396 4713 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045405 4713 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045414 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045424 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045435 4713 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045444 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045452 4713 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045461 4713 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045470 4713 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045479 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045487 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045497 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045506 4713 reconciler_common.go:293] "Volume 
detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045521 4713 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045528 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045537 4713 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045548 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045557 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045565 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045574 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045582 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045591 4713 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045599 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045607 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045616 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045625 4713 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045633 4713 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045642 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045650 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045658 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045666 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045674 4713 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045683 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045692 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045701 4713 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045710 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045719 4713 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045728 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 
15:34:06.045736 4713 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045744 4713 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045753 4713 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045761 4713 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045769 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045777 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045786 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045793 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045802 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045810 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045819 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045828 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045837 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045845 4713 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045853 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045862 4713 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045870 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045878 4713 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045886 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045895 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045903 4713 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045914 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045921 4713 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045930 4713 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045940 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045949 4713 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045958 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.045968 4713 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.047138 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.056172 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.061164 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.062531 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.065274 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.076809 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.078870 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.090489 4713 csr.go:261] certificate signing request csr-8sjf2 is approved, waiting to be issued Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.096762 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.101692 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.114589 4713 csr.go:257] certificate signing request csr-8sjf2 is issued Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.115944 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.130897 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.148777 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.148812 4713 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.148824 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.167337 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.202726 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.450934 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.450991 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.451086 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.451143 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:07.451128957 +0000 UTC m=+22.588146192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.451265 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.451445 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:07.451416096 +0000 UTC m=+22.588433351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.552436 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.552632 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.552685 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:07.552639354 +0000 UTC m=+22.689656589 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.552802 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.552839 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.552871 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.552895 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.553001 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:07.552969993 +0000 UTC m=+22.689987348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.553108 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.553144 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.553173 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: E0126 15:34:06.553252 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:07.55321605 +0000 UTC m=+22.690233285 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.754257 4713 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47464->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.754348 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47464->192.168.126.11:17697: read: connection reset by peer" Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.759536 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:51:11.594952567 +0000 UTC Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.961845 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eeb56cb42df81e5caf25919d8116096815a8e7937d0b43de31fbe6e812a02663"} Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.962914 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cebd87a8dd249d26f5d6f1a53ed714cafba46136172603771c72d85cd4732181"} Jan 26 15:34:06 crc kubenswrapper[4713]: I0126 15:34:06.965736 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab0d3df7831914502ee72624d48d10a8d7bf26e5599bf91f3bcf507b6a58eabd"} Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.115601 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 15:29:06 +0000 UTC, rotation deadline is 2026-11-09 04:40:02.993226512 +0000 UTC Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.115661 4713 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6877h5m55.877569884s for next certificate rotation Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.460608 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.460674 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.460842 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.460846 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.460930 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:09.460906779 +0000 UTC m=+24.597924004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.460954 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:09.46094545 +0000 UTC m=+24.597962685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.561758 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.561877 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.561986 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:09.561949872 +0000 UTC m=+24.698967107 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562018 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562038 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562051 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562109 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:09.562093836 +0000 UTC m=+24.699111071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.562111 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562303 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562325 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562339 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.562438 4713 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:09.562419915 +0000 UTC m=+24.699437150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.760300 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:39:36.16306011 +0000 UTC Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.803099 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.803184 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.803276 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.803334 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.803578 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:07 crc kubenswrapper[4713]: E0126 15:34:07.803711 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.807326 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.808035 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.808729 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.809351 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.809989 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.810501 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.811095 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.812925 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.813625 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.814729 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.869175 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.870477 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.871216 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.872012 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.872800 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.873555 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.875377 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.876100 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.877048 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.877814 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.878326 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.878964 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.879445 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.880124 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.880617 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.881224 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.881933 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.882425 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.883017 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.883540 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.884019 4713 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.884121 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.887927 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.888629 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.889287 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.890824 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.892100 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.892664 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.893552 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.895890 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.896990 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.897955 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.899702 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.900389 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.900895 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.901528 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.902408 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.903205 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.903755 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.904348 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.904870 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.905486 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.906134 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.906666 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.907195 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4ld7b"] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.907589 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fgqsv"] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.907757 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.908134 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-tn7l2"] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.908343 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2drw2"] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.908631 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.908817 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.909408 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5gf9s"] Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.910049 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.910219 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.910447 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.911792 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.912300 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.912881 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.913494 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.913563 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.916291 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.916348 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917115 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917346 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917419 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917424 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 15:34:07 crc 
kubenswrapper[4713]: I0126 15:34:07.917482 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917514 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917533 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917540 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917418 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917619 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917678 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.917731 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.918232 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.918486 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.929233 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.947246 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.959333 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969832 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-os-release\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969878 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-multus-certs\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969901 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-etc-kubernetes\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969919 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969944 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969966 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.969993 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-binary-copy\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970030 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-socket-dir-parent\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970052 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-netns\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970074 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-bin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970098 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970206 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970235 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970306 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970464 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970495 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bc036917-2d57-4b40-a5b1-21b68b1f3aab-hosts-file\") pod \"node-resolver-fgqsv\" (UID: \"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970556 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-hostroot\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970583 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970611 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970636 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970660 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-cnibin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970689 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjvj\" (UniqueName: \"kubernetes.io/projected/059cbb92-ce39-4fb3-8a36-0fb66e359701-kube-api-access-fwjvj\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970714 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970738 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 
15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970795 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970811 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8524\" (UniqueName: \"kubernetes.io/projected/d21f731c-7a63-4c3c-bdc5-9267197741d4-kube-api-access-k8524\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970855 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hcx\" (UniqueName: \"kubernetes.io/projected/f608dd80-4cbf-4490-b062-2bef233d25d1-kube-api-access-w2hcx\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970875 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-system-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970898 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-k8s-cni-cncf-io\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970914 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.970930 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f608dd80-4cbf-4490-b062-2bef233d25d1-rootfs\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971002 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971020 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f608dd80-4cbf-4490-b062-2bef233d25d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:07 crc kubenswrapper[4713]: 
I0126 15:34:07.971038 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-multus\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971131 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971171 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971288 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-daemon-config\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971314 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971335 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-system-cni-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971379 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f608dd80-4cbf-4490-b062-2bef233d25d1-proxy-tls\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971423 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-cni-binary-copy\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971442 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-conf-dir\") pod \"multus-4ld7b\" (UID: 
\"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971457 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971480 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971502 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-cnibin\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971529 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-os-release\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971553 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971649 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkxj\" (UniqueName: \"kubernetes.io/projected/bc036917-2d57-4b40-a5b1-21b68b1f3aab-kube-api-access-znkxj\") pod \"node-resolver-fgqsv\" (UID: \"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971715 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmw7m\" (UniqueName: \"kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971769 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-kubelet\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.971797 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.973005 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.974201 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22" exitCode=255 Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.974279 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22"} Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.976450 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e"} Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.978207 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7"} Jan 26 15:34:07 crc kubenswrapper[4713]: I0126 15:34:07.987744 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.007978 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.021175 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.036452 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.055381 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.067384 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072556 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-os-release\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072610 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-multus-certs\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072635 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-etc-kubernetes\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072671 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" 
Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072696 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072723 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072747 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-binary-copy\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072799 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073564 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073621 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072854 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-multus-certs\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073643 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-socket-dir-parent\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072856 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-etc-kubernetes\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072794 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073670 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-netns\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073728 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-netns\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073730 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-bin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.072813 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073759 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073038 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-os-release\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073814 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073837 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073846 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 
15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073844 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-bin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073883 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073873 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073929 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-socket-dir-parent\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.073914 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074033 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bc036917-2d57-4b40-a5b1-21b68b1f3aab-hosts-file\") pod \"node-resolver-fgqsv\" (UID: \"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074060 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-cnibin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074073 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074085 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-hostroot\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074119 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bc036917-2d57-4b40-a5b1-21b68b1f3aab-hosts-file\") pod \"node-resolver-fgqsv\" (UID: 
\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074120 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074117 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-binary-copy\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074153 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-cnibin\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074178 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074152 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074153 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-hostroot\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074219 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.074387 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075048 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075105 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjvj\" (UniqueName: \"kubernetes.io/projected/059cbb92-ce39-4fb3-8a36-0fb66e359701-kube-api-access-fwjvj\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075132 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8524\" (UniqueName: \"kubernetes.io/projected/d21f731c-7a63-4c3c-bdc5-9267197741d4-kube-api-access-k8524\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075155 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075179 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f608dd80-4cbf-4490-b062-2bef233d25d1-rootfs\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075199 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hcx\" (UniqueName: \"kubernetes.io/projected/f608dd80-4cbf-4490-b062-2bef233d25d1-kube-api-access-w2hcx\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-system-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075246 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-k8s-cni-cncf-io\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075249 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f608dd80-4cbf-4490-b062-2bef233d25d1-rootfs\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075268 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075293 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075315 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f608dd80-4cbf-4490-b062-2bef233d25d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075338 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-multus\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075375 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075400 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075399 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059cbb92-ce39-4fb3-8a36-0fb66e359701-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075466 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075421 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075316 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-system-cni-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075518 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-cni-multus\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075424 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-run-k8s-cni-cncf-io\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075433 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075565 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-daemon-config\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075593 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-system-cni-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075619 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f608dd80-4cbf-4490-b062-2bef233d25d1-proxy-tls\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075643 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075664 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-system-cni-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075687 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-cni-binary-copy\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075716 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-conf-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075742 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075770 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075797 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-cnibin\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075828 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-os-release\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075858 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znkxj\" (UniqueName: \"kubernetes.io/projected/bc036917-2d57-4b40-a5b1-21b68b1f3aab-kube-api-access-znkxj\") pod \"node-resolver-fgqsv\" (UID: \"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075875 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075893 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmw7m\" (UniqueName: \"kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075934 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-kubelet\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075965 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076010 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076068 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f608dd80-4cbf-4490-b062-2bef233d25d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076290 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-os-release\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075965 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076318 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-daemon-config\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076301 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-host-var-lib-kubelet\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076274 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d21f731c-7a63-4c3c-bdc5-9267197741d4-multus-conf-dir\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.075936 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076352 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-cnibin\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076576 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d21f731c-7a63-4c3c-bdc5-9267197741d4-cni-binary-copy\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076595 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.076686 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.083894 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f608dd80-4cbf-4490-b062-2bef233d25d1-proxy-tls\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.083963 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.093117 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hcx\" (UniqueName: \"kubernetes.io/projected/f608dd80-4cbf-4490-b062-2bef233d25d1-kube-api-access-w2hcx\") pod \"machine-config-daemon-tn7l2\" (UID: \"f608dd80-4cbf-4490-b062-2bef233d25d1\") " pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.093302 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjvj\" (UniqueName: \"kubernetes.io/projected/059cbb92-ce39-4fb3-8a36-0fb66e359701-kube-api-access-fwjvj\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.094734 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8524\" (UniqueName: 
\"kubernetes.io/projected/d21f731c-7a63-4c3c-bdc5-9267197741d4-kube-api-access-k8524\") pod \"multus-4ld7b\" (UID: \"d21f731c-7a63-4c3c-bdc5-9267197741d4\") " pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.095780 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znkxj\" (UniqueName: \"kubernetes.io/projected/bc036917-2d57-4b40-a5b1-21b68b1f3aab-kube-api-access-znkxj\") pod \"node-resolver-fgqsv\" (UID: \"bc036917-2d57-4b40-a5b1-21b68b1f3aab\") " pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.096284 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmw7m\" (UniqueName: \"kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m\") pod \"ovnkube-node-2drw2\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.103570 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.113948 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.120911 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059cbb92-ce39-4fb3-8a36-0fb66e359701-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5gf9s\" (UID: \"059cbb92-ce39-4fb3-8a36-0fb66e359701\") " pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.125634 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.130807 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.131168 4713 scope.go:117] "RemoveContainer" containerID="14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.137126 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.149143 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.168307 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.181058 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.192471 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.231683 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4ld7b" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.242265 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fgqsv" Jan 26 15:34:08 crc kubenswrapper[4713]: W0126 15:34:08.246284 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd21f731c_7a63_4c3c_bdc5_9267197741d4.slice/crio-c5335830c43e7d1f5993234ff7204f2419b9953b52ad55dc2701df24b2e2bc9c WatchSource:0}: Error finding container c5335830c43e7d1f5993234ff7204f2419b9953b52ad55dc2701df24b2e2bc9c: Status 404 returned error can't find the container with id c5335830c43e7d1f5993234ff7204f2419b9953b52ad55dc2701df24b2e2bc9c Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.248945 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.256020 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.261795 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:08 crc kubenswrapper[4713]: W0126 15:34:08.300160 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf608dd80_4cbf_4490_b062_2bef233d25d1.slice/crio-327f06be8cbcebf39306312d924b39dac6a577b62ce328240f157ab84c5024fb WatchSource:0}: Error finding container 327f06be8cbcebf39306312d924b39dac6a577b62ce328240f157ab84c5024fb: Status 404 returned error can't find the container with id 327f06be8cbcebf39306312d924b39dac6a577b62ce328240f157ab84c5024fb Jan 26 15:34:08 crc kubenswrapper[4713]: W0126 15:34:08.301302 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba2d551_0768_4bac_9af5_bd6e7e58ce8c.slice/crio-2b66bf81676bede77b67f73ddef6eb873ce0e6fdaf418381db2441f7a2dac300 WatchSource:0}: Error finding container 2b66bf81676bede77b67f73ddef6eb873ce0e6fdaf418381db2441f7a2dac300: Status 404 returned error can't find the container with id 2b66bf81676bede77b67f73ddef6eb873ce0e6fdaf418381db2441f7a2dac300 Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.495822 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.503805 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.510126 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.510894 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26
T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.525426 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.538204 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.549434 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.561553 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.579185 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.592232 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.606500 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.615194 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.626134 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.637754 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.655345 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.670411 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.681256 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.694435 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.759770 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.761369 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:52:30.020447026 +0000 UTC Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.793696 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428
f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.805859 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.817725 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.829173 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.839592 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.850857 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.863279 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.876174 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.889077 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.983203 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" exitCode=0 Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.983317 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.983411 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"2b66bf81676bede77b67f73ddef6eb873ce0e6fdaf418381db2441f7a2dac300"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.985568 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerStarted","Data":"0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.985629 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerStarted","Data":"48e305aa0567e58ec45153bd515fd26f8f2d4f014d42b64bd994a9540ef2c91b"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.989672 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.989733 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"327f06be8cbcebf39306312d924b39dac6a577b62ce328240f157ab84c5024fb"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.991430 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fgqsv" event={"ID":"bc036917-2d57-4b40-a5b1-21b68b1f3aab","Type":"ContainerStarted","Data":"e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.991458 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fgqsv" event={"ID":"bc036917-2d57-4b40-a5b1-21b68b1f3aab","Type":"ContainerStarted","Data":"9ffa8682ab289da42e1e57db7c9dab4b2ab43be4b82cab3c3f2ab97959db3dc7"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.992759 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerStarted","Data":"5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.992793 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerStarted","Data":"c5335830c43e7d1f5993234ff7204f2419b9953b52ad55dc2701df24b2e2bc9c"} Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.996149 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.996539 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:08 crc kubenswrapper[4713]: I0126 15:34:08.999728 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5"} Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.000253 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.002643 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58"} Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.012814 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.029219 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.040356 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.050045 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.059630 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.071288 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.092102 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.101636 
4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.116127 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.125730 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.136179 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.145966 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428
f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.160325 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.174075 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.183595 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.190860 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.204966 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.218345 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.231900 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.245450 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.257232 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.272479 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.286248 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.299959 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.323211 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.491383 
4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.491425 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.491507 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.491635 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.491647 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:13.491626113 +0000 UTC m=+28.628643368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.491776 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:13.491754266 +0000 UTC m=+28.628771501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.592862 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593063 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:13.593029156 +0000 UTC m=+28.730046391 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.593170 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.593200 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593342 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593379 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593395 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593392 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593420 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593432 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593439 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:13.593431418 +0000 UTC m=+28.730448653 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.593497 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:13.593479959 +0000 UTC m=+28.730497194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.763015 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:29:51.012811693 +0000 UTC Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.804571 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.804606 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.804671 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.804796 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.804931 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:09 crc kubenswrapper[4713]: E0126 15:34:09.805073 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.826110 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.845955 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.846569 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.847578 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.868133 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.883191 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.896863 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.909163 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.930258 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.943759 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.969289 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:09 crc kubenswrapper[4713]: I0126 15:34:09.983976 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.000101 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:09Z is 
after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010491 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010554 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010572 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010585 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010598 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.010611 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.012329 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7" exitCode=0 Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.012423 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.015614 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.018491 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.031545 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7"} Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.035314 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.066393 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.077181 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.091528 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t2rqh"] Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.091964 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:10 crc kubenswrapper[4713]: W0126 15:34:10.093931 4713 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 26 15:34:10 crc kubenswrapper[4713]: E0126 15:34:10.093984 4713 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.094105 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.094935 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.095036 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.096306 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.097991 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.098965 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46e41399-4ca1-47ca-8151-b953f284e096-host\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.099527 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9dt\" (UniqueName: \"kubernetes.io/projected/46e41399-4ca1-47ca-8151-b953f284e096-kube-api-access-4f9dt\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.167481 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.185268 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.201333 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9dt\" (UniqueName: \"kubernetes.io/projected/46e41399-4ca1-47ca-8151-b953f284e096-kube-api-access-4f9dt\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh"
Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.201460 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh"
Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.201499 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46e41399-4ca1-47ca-8151-b953f284e096-host\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh"
Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.201595 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46e41399-4ca1-47ca-8151-b953f284e096-host\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh"
Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.222815 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.253630 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9dt\" (UniqueName: \"kubernetes.io/projected/46e41399-4ca1-47ca-8151-b953f284e096-kube-api-access-4f9dt\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.264077 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.292715 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.334479 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.367339 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.382993 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.396978 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.412720 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.431068 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.465886 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z 
is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.508785 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.551803 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.579792 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.619202 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.661618 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.709454 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.742707 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.763386 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:11:43.057506622 +0000 UTC Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.778455 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.818857 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.860752 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.900009 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.939304 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:10 crc kubenswrapper[4713]: I0126 15:34:10.984169 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.021674 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:11Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:11 crc 
kubenswrapper[4713]: I0126 15:34:11.038431 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerStarted","Data":"9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d"} Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.075989 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e
779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:11Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:11 crc kubenswrapper[4713]: E0126 15:34:11.202154 4713 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: failed to sync configmap cache: timed out waiting for the condition Jan 26 15:34:11 crc kubenswrapper[4713]: E0126 15:34:11.202567 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca podName:46e41399-4ca1-47ca-8151-b953f284e096 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:11.70254151 +0000 UTC m=+26.839558745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca") pod "node-ca-t2rqh" (UID: "46e41399-4ca1-47ca-8151-b953f284e096") : failed to sync configmap cache: timed out waiting for the condition Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.454973 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.716073 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.717735 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46e41399-4ca1-47ca-8151-b953f284e096-serviceca\") pod \"node-ca-t2rqh\" (UID: \"46e41399-4ca1-47ca-8151-b953f284e096\") " pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.764230 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:54:56.752533079 +0000 UTC Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.802653 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.802735 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:11 crc kubenswrapper[4713]: E0126 15:34:11.802884 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:11 crc kubenswrapper[4713]: E0126 15:34:11.803086 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.803252 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:11 crc kubenswrapper[4713]: E0126 15:34:11.803353 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:11 crc kubenswrapper[4713]: I0126 15:34:11.901354 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t2rqh" Jan 26 15:34:11 crc kubenswrapper[4713]: W0126 15:34:11.918827 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46e41399_4ca1_47ca_8151_b953f284e096.slice/crio-75f20a050d39d4fafd2778d1a0f8c168c7897e8eac4e84f91f32e7b5e9ad17e9 WatchSource:0}: Error finding container 75f20a050d39d4fafd2778d1a0f8c168c7897e8eac4e84f91f32e7b5e9ad17e9: Status 404 returned error can't find the container with id 75f20a050d39d4fafd2778d1a0f8c168c7897e8eac4e84f91f32e7b5e9ad17e9 Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.042519 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t2rqh" event={"ID":"46e41399-4ca1-47ca-8151-b953f284e096","Type":"ContainerStarted","Data":"75f20a050d39d4fafd2778d1a0f8c168c7897e8eac4e84f91f32e7b5e9ad17e9"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.051927 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d" exitCode=0 Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.052032 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.067382 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.086476 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.105047 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.126382 4713 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.128722 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.128788 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.128801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.128973 4713 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.132035 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.139765 4713 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.139985 4713 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.141588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.141649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.141664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.141690 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.141704 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.152373 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.162546 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.170922 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.170999 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.171020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.171051 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.171075 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.181569 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.186016 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.190592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.190643 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.190656 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.190672 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.190683 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.196563 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.202846 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.207097 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.207201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.207301 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.207434 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.207517 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.209058 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.221329 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.223846 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.225931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.225995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.226007 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.226027 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.226040 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.239910 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: E0126 15:34:12.240034 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.244258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.244304 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.244314 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.244332 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.244343 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.247798 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z 
is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.263517 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.287317 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.304457 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
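
Every "Failed to update status for pod" entry here fails the same way: the kubelet's status patch goes through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. A minimal Go probe, using only the endpoint taken from these lines (everything else is illustrative, not kubelet code), can confirm the certificate dates independently:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log lines; InsecureSkipVerify only disables
	// verification so the expired leaf certificate can be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Same condition the kubelet reports: current time is after notAfter.
		fmt.Println("certificate has expired")
	}
}

InsecureSkipVerify here only bypasses verification so the expired leaf can be read; the kubelet correctly refuses it, which is why every patch attempt in these entries ends in the same x509 error.
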
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.315599 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.329807 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.346805 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.346844 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.346857 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.346876 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.346888 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.450457 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.450962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.450984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.451015 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.451036 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.554554 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.554599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.554610 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.554629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.554645 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.657116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.657150 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.657159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.657175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.657186 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
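
The repeating NodeNotReady condition comes from the runtime finding no CNI config under /etc/kubernetes/cni/net.d/; the directory stays empty until ovnkube-node, still shown in PodInitializing above, writes one. A stdlib-only sketch of that readiness test, assuming the conventional CNI extensions .conf/.conflist/.json rather than the exact kubelet/cri-o code path:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path reported in the NotReady condition
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// .conf/.conflist/.json is the usual CNI convention; assumed here,
		// not lifted from the kubelet or cri-o sources.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; node stays NotReady")
	}
}
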
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.760113 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.760173 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.760189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.760215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.760229 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.765303 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:23:38.299900936 +0000 UTC Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.863543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.863636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.863650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.863666 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.863678 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
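
The certificate_manager line above concerns the kubelet-serving certificate, not the failing webhook cert: it is valid until 2026-02-24, but its rotation deadline already passed on 2026-01-05, so rotation is due immediately. A rough sketch of how client-go draws that deadline, per my reading of upstream (a jittered point at about 70-90% of the validity window; the one-year validity below is an assumption, not logged):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline approximates how client-go's certificate manager picks the
// deadline logged above: a random point at roughly 70-90% of the validity
// window. This mirrors the upstream behavior as I understand it, not verbatim.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(validity) * jitter))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed 1-year validity
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
}
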
Has your network provider started?"} Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.966404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.966462 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.966476 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.966497 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:12 crc kubenswrapper[4713]: I0126 15:34:12.966510 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:12Z","lastTransitionTime":"2026-01-26T15:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.063851 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.065032 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t2rqh" event={"ID":"46e41399-4ca1-47ca-8151-b953f284e096","Type":"ContainerStarted","Data":"56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.068798 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b" exitCode=0 Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.068832 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.069288 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.069346 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.069385 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.069411 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.069428 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.082675 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.101885 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.118714 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.135883 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.159596 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.173227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.173288 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.173303 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.173325 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.173341 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
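
The status payloads in these entries are quoted twice, once when the patch error is built and once when klog renders the err= field, which is why every quote appears as \\\". Stripping both levels recovers plain JSON; a hypothetical Go helper (the sample line is a shortened stand-in, and the marker strings assume exactly the klog layout shown here):

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// unquote strips one level of Go-style string escaping.
func unquote(s string) (string, error) {
	return strconv.Unquote(`"` + s + `"`)
}

func main() {
	// Shortened, hypothetical stand-in for one journal line above; the real
	// payloads are far longer but escaped the same way.
	line := `err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"}}\" for pod ..."`

	i := strings.Index(line, `status \"`)
	j := strings.Index(line, `\" for pod`)
	if i < 0 || j < 0 {
		fmt.Println("markers not found")
		return
	}
	patch := line[i+len(`status \"`) : j]

	// One unquote for klog's err= quoting, one for the embedded patch string.
	for k := 0; k < 2; k++ {
		u, err := unquote(patch)
		if err != nil {
			fmt.Println("unquote:", err)
			return
		}
		patch = u
	}

	var v map[string]any
	if err := json.Unmarshal([]byte(patch), &v); err != nil {
		fmt.Println("json:", err)
		return
	}
	out, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(out))
}
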
Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.182670 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.193262 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.206655 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.218842 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.230712 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.247820 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.261634 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276865 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276904 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276917 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276951 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.276990 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.296485 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.309623 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.320440 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.335852 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.351494 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.364598 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.379792 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.379844 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.379859 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.379879 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.379895 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.396674 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.411636 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.428032 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.451110 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.466231 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.481245 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.484153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.484230 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.484248 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.484304 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.484323 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.495833 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.509224 4713 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.523294 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.537264 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.537580 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.537614 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.537695 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.537753 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:21.537737812 +0000 UTC m=+36.674755037 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.537794 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.537878 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:21.537855035 +0000 UTC m=+36.674872260 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.556606 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:13Z 
is after 2025-08-24T17:21:41Z" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.587730 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.587780 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.587790 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.587812 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.587826 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.638580 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.638740 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.638776 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.638981 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639002 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639016 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639021 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:21.638891378 +0000 UTC m=+36.775908653 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639026 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639228 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639260 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639179 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:21.639142295 +0000 UTC m=+36.776159570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.639405 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:21.639351541 +0000 UTC m=+36.776368816 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.691547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.691598 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.691609 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.691628 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.691639 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.766101 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:50:27.686413904 +0000 UTC Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.795736 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.795807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.795830 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.795862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.795887 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.802725 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.802789 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.802736 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.802950 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.803123 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:13 crc kubenswrapper[4713]: E0126 15:34:13.803238 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.899025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.899076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.899085 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.899102 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:13 crc kubenswrapper[4713]: I0126 15:34:13.899112 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:13Z","lastTransitionTime":"2026-01-26T15:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.001997 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.002054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.002068 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.002088 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.002105 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.075504 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38" exitCode=0 Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.075607 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.091873 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.105078 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.106557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.106592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.106602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.106621 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.106636 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.115941 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.137384 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.151911 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.169810 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.189786 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.201829 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.209482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.209532 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.209544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.209564 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.209577 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.222682 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.236512 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.250382 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.274217 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.290890 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.309048 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.311802 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.311828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.311836 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.311850 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.311860 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.320885 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.415098 4713 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.415143 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.415153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.415168 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.415181 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.518285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.518320 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.518329 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.518345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.518355 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.620722 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.620772 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.620789 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.620815 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.620832 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.724335 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.724406 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.724420 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.724440 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.724454 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.767015 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:27:43.793169975 +0000 UTC Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.827095 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.827146 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.827156 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.827172 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.827181 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.929225 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.929267 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.929279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.929296 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:14 crc kubenswrapper[4713]: I0126 15:34:14.929310 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:14Z","lastTransitionTime":"2026-01-26T15:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.032497 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.032570 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.032583 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.032603 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.032615 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.100155 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.100535 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.100611 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.100621 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.107289 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerStarted","Data":"e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.115052 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.130701 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.130934 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.131290 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.135491 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.135544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.135559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.135582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.135596 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.145201 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.159344 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.179543 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.194404 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.212668 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.234494 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.240620 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.240698 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.240730 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.240761 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.240784 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.257990 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.280752 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.294736 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.310423 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.324175 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.339227 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.343588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.343646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.343657 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.343677 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.343690 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.363016 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.378878 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.394609 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.410509 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.423293 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.444380 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.446167 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.446209 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.446222 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.446241 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.446253 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.458039 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.473686 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.490283 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.506221 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.521678 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.535718 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.549091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.549133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.549143 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.549157 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.549170 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.551910 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.569786 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.584413 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.606323 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.641764 4713 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.658400 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.658754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc 
kubenswrapper[4713]: I0126 15:34:15.658766 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.658787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.658799 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.762729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.762794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.762810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.762842 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.762858 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.768075 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:01:55.978249774 +0000 UTC Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.803552 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.803639 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.803649 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:15 crc kubenswrapper[4713]: E0126 15:34:15.803769 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:15 crc kubenswrapper[4713]: E0126 15:34:15.803901 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:15 crc kubenswrapper[4713]: E0126 15:34:15.803998 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.825423 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.844275 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.858338 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.865282 4713 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.865355 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.865396 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.865421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.865438 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.894997 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386
f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.911339 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.929537 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.944327 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.959200 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.970437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.970485 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.970498 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.970516 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.970529 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:15Z","lastTransitionTime":"2026-01-26T15:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.983928 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:15 crc kubenswrapper[4713]: I0126 15:34:15.998008 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.012533 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.027287 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.041817 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.056125 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.066389 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.073180 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.073231 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.073241 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.073257 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.073268 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.114123 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486" exitCode=0 Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.114208 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.127003 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 
15:34:16.143345 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.160749 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.175559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.175899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.175980 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.176098 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.176214 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.181063 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386
f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.197285 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.208507 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.218845 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.232157 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.242329 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.256871 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.270091 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.279467 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.279518 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.279530 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.279552 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.279566 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.286853 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwj
vj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.307831 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.320114 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.332735 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.382069 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.382130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.382146 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.382168 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.382184 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.484808 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.484872 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.484890 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.484912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.484933 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.587466 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.587835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.587936 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.588033 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.588131 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.691420 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.691480 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.691498 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.691522 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.691540 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.769089 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:19:48.146770911 +0000 UTC Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.794624 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.794674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.794686 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.794705 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.794718 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.897250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.897291 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.897300 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.897313 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:16 crc kubenswrapper[4713]: I0126 15:34:16.897323 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:16Z","lastTransitionTime":"2026-01-26T15:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.000097 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.000179 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.000204 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.000240 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.000265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.103941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.104000 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.104016 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.104040 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.104058 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.124883 4713 generic.go:334] "Generic (PLEG): container finished" podID="059cbb92-ce39-4fb3-8a36-0fb66e359701" containerID="2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add" exitCode=0 Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.125008 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerDied","Data":"2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.153932 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.180280 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.195945 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.207581 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.207634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.207646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.207666 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.207680 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.220006 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.235451 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.250802 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.275073 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.292180 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.305031 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.309773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.309804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.309812 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.309826 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.309837 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.319873 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.333179 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.355667 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.371091 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.384210 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.401560 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:17Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.412147 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.412189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.412201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.412217 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.412228 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.515346 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.515408 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.515418 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.515434 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.515445 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.618190 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.618229 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.618243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.618261 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.618272 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.720411 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.720462 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.720473 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.720489 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.720500 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.769402 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:26:26.160995649 +0000 UTC Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.803025 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.803052 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.803022 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:17 crc kubenswrapper[4713]: E0126 15:34:17.803171 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:17 crc kubenswrapper[4713]: E0126 15:34:17.803475 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:17 crc kubenswrapper[4713]: E0126 15:34:17.803547 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.823038 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.823080 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.823091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.823109 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.823123 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.925923 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.925973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.925983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.926000 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:17 crc kubenswrapper[4713]: I0126 15:34:17.926011 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:17Z","lastTransitionTime":"2026-01-26T15:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.029042 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.029097 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.029106 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.029125 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.029136 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.135583 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.135629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.135641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.135659 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.135671 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.140588 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" event={"ID":"059cbb92-ce39-4fb3-8a36-0fb66e359701","Type":"ContainerStarted","Data":"be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.156270 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.173950 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.189402 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.213773 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a193
2c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.232530 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.239189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.239245 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.239264 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.239401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.239441 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.249784 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.264176 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.282999 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.298104 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.319050 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.338605 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.343847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.343898 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.343907 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.343927 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.343941 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.354290 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.369902 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.384622 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.406629 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:18Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.447960 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.448010 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.448020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.448050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.448064 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.551427 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.551478 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.551492 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.551509 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.551518 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.654999 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.655054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.655065 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.655082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.655096 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.757836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.757944 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.757962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.757989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.758010 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.770145 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:04:24.720718904 +0000 UTC Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.861639 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.861701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.861716 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.861737 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.861751 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.964165 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.964534 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.964546 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.964565 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:18 crc kubenswrapper[4713]: I0126 15:34:18.964577 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:18Z","lastTransitionTime":"2026-01-26T15:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.105023 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.105083 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.105099 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.105125 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.105139 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.208682 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.208750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.208768 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.208794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.208812 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.311207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.311256 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.311269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.311292 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.311304 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.414496 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.414531 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.414542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.414559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.414571 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.516683 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.516729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.516741 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.516758 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.516771 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.621001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.621056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.621081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.621112 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.621135 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.723524 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.723563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.723577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.723596 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.723608 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.770306 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:00:38.312620055 +0000 UTC Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.802966 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.803079 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:19 crc kubenswrapper[4713]: E0126 15:34:19.803150 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:19 crc kubenswrapper[4713]: E0126 15:34:19.803255 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.803417 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:19 crc kubenswrapper[4713]: E0126 15:34:19.803498 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.826114 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.826152 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.826161 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.826173 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.826183 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.929519 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.929581 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.929593 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.929612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.929629 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:19Z","lastTransitionTime":"2026-01-26T15:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.998819 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b"] Jan 26 15:34:19 crc kubenswrapper[4713]: I0126 15:34:19.999829 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.004022 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.005196 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.016521 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.018073 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56q9\" (UniqueName: \"kubernetes.io/projected/83374550-b354-4961-8649-e679b13e36e2-kube-api-access-v56q9\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.018128 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.018161 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.018264 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83374550-b354-4961-8649-e679b13e36e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.032664 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.033044 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.033067 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.033078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.033096 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.033110 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.049206 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.079868 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.096272 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.115011 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.118883 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83374550-b354-4961-8649-e679b13e36e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.118950 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56q9\" (UniqueName: \"kubernetes.io/projected/83374550-b354-4961-8649-e679b13e36e2-kube-api-access-v56q9\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.118989 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.119045 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.120201 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.120334 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83374550-b354-4961-8649-e679b13e36e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 
15:34:20.130582 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83374550-b354-4961-8649-e679b13e36e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.135461 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.137258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.137298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.137309 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.137326 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.137337 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.140158 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56q9\" (UniqueName: \"kubernetes.io/projected/83374550-b354-4961-8649-e679b13e36e2-kube-api-access-v56q9\") pod \"ovnkube-control-plane-749d76644c-92r8b\" (UID: \"83374550-b354-4961-8649-e679b13e36e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.150655 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/0.log" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.153137 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.154476 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" 
containerID="6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d" exitCode=1 Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.154524 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.155277 4713 scope.go:117] "RemoveContainer" containerID="6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.168676 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.197387 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.217817 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.234930 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.239950 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.239992 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.240005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.240024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.240040 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.277017 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.296932 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.317269 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.320454 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.335515 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.342204 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.342235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.342243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.342261 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 
15:34:20.342271 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.355526 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.384435 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386
f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\" 5948 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:19.559079 5948 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:19.559157 5948 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 15:34:19.559197 5948 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:19.559220 5948 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:19.559281 5948 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:19.559304 5948 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:19.559321 5948 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:19.559335 5948 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:19.559348 5948 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:19.559385 5948 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:19.559399 5948 factory.go:656] Stopping watch factory\\\\nI0126 15:34:19.559420 5948 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:19.559423 5948 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.402561 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.424662 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.442126 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.445534 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.445615 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.445640 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.445668 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.445690 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.462294 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.488832 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.506150 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.524754 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.542624 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.548251 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.548299 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.548317 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.548343 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.548386 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.562825 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.587790 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10
910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.607239 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.624341 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.638789 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.650967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.651033 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.651052 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.651078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.651097 4713 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.669823 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crc
ont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.754221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.754667 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.754990 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.755225 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.755458 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.771345 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:00:21.631778964 +0000 UTC Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.859544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.859626 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.859646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.859674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.859700 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.962641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.962689 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.962699 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.962717 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:20 crc kubenswrapper[4713]: I0126 15:34:20.962728 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:20Z","lastTransitionTime":"2026-01-26T15:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.065666 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.065716 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.065729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.065751 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.065773 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.159999 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" event={"ID":"83374550-b354-4961-8649-e679b13e36e2","Type":"ContainerStarted","Data":"d642c34102e77ea0b225efd82c4202685bbcaa99140c0a3a4622328e6b9aeeec"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.169304 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.169495 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.169593 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.169728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.169823 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.273605 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.274111 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.274123 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.274152 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.274164 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.377765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.377821 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.377834 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.377853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.377865 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.481145 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.481199 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.481210 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.481229 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.481244 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.516653 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4vgps"] Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.517262 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.517333 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.530491 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.535099 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.535224 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/6f185439-f527-44bf-8362-a9cf40e00d3c-kube-api-access-s2q5b\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.552606 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.567378 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.583908 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.583943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.583954 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.583970 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.584030 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.587464 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.606895 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.625592 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.636411 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.636498 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/6f185439-f527-44bf-8362-a9cf40e00d3c-kube-api-access-s2q5b\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 
15:34:21.636543 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.636568 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.636623 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.636644 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:22.13661772 +0000 UTC m=+37.273634955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.636807 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.636897 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.636869877 +0000 UTC m=+52.773887152 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.637019 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.637065 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.637052122 +0000 UTC m=+52.774069397 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.640754 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.654950 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/6f185439-f527-44bf-8362-a9cf40e00d3c-kube-api-access-s2q5b\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.655681 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.679216 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.686714 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.686762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.686782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.686810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.686827 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.698790 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.712553 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.733794 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\" 5948 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:19.559079 5948 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:19.559157 5948 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 15:34:19.559197 5948 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:19.559220 5948 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:19.559281 5948 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:19.559304 5948 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:19.559321 5948 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:19.559335 5948 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:19.559348 5948 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:19.559385 5948 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:19.559399 5948 factory.go:656] Stopping watch factory\\\\nI0126 15:34:19.559420 5948 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:19.559423 5948 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.737473 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.737833 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.737878 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738008 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.737973972 +0000 UTC m=+52.874991237 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738011 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738099 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738124 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738182 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.738167797 +0000 UTC m=+52.875185062 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738012 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738393 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738426 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.738528 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.738503227 +0000 UTC m=+52.875520452 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.748330 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.766168 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.772835 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-12-04 22:46:33.493938401 +0000 UTC Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.782461 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.790087 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.790129 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.790140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.790159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.790171 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.802124 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.802815 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.802896 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.802962 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.803041 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.803151 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:21 crc kubenswrapper[4713]: E0126 15:34:21.803304 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.819745 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:21Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.894448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.894518 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.894539 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.894573 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.894601 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.996605 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.996640 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.996649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.996664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:21 crc kubenswrapper[4713]: I0126 15:34:21.996673 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:21Z","lastTransitionTime":"2026-01-26T15:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.099643 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.099998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.100165 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.100297 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.100437 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.144132 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.144430 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.144709 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:23.144676386 +0000 UTC m=+38.281693621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.165293 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/0.log" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.168413 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.169067 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.170439 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" event={"ID":"83374550-b354-4961-8649-e679b13e36e2","Type":"ContainerStarted","Data":"8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.170472 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" event={"ID":"83374550-b354-4961-8649-e679b13e36e2","Type":"ContainerStarted","Data":"ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.188568 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203207 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203546 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203564 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203573 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.203603 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.221613 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:
34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.238217 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.263976 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.281504 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.300878 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.306926 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.306962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.306973 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.306988 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.306999 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.323210 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.336562 4713 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.348208 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.366411 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\" 5948 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:19.559079 5948 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:19.559157 5948 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 15:34:19.559197 5948 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:19.559220 5948 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:19.559281 5948 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:19.559304 5948 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:19.559321 5948 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:19.559335 5948 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:19.559348 5948 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:19.559385 5948 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:19.559399 5948 factory.go:656] Stopping watch factory\\\\nI0126 15:34:19.559420 5948 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:19.559423 5948 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.383263 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.396811 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.409630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.409934 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.409998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.410069 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.410176 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.411665 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.426158 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.439814 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.455872 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.475440 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.491118 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.504819 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.512622 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.512712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.512727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.512747 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.512763 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.522607 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.537970 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.550673 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.568456 4713 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.568498 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.568508 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.568525 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.568536 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.570666 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f03291
2af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\" 5948 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:19.559079 5948 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:19.559157 5948 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 15:34:19.559197 5948 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:19.559220 5948 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:19.559281 5948 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:19.559304 5948 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:19.559321 5948 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:19.559335 5948 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:19.559348 5948 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:19.559385 5948 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:19.559399 5948 factory.go:656] Stopping watch factory\\\\nI0126 15:34:19.559420 5948 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:19.559423 5948 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.580632 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.582003 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.584871 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.584903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.584912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.584932 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.584946 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.595167 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.596344 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.599778 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.599806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.599817 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.599833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.599845 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.610409 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.611722 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.614932 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.614974 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.614986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.615005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.615019 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.631694 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.631781 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.635582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.635616 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.635629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.635646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.635660 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.644103 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc 
kubenswrapper[4713]: E0126 15:34:22.649579 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.649747 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.651660 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 
crc kubenswrapper[4713]: I0126 15:34:22.651713 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.651726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.651749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.651764 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.662605 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d
05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.676017 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.689401 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.706524 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.717231 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.753941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc 
kubenswrapper[4713]: I0126 15:34:22.754003 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.754021 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.754038 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.754052 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.809028 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.809022 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:49:53.840870467 +0000 UTC Jan 26 15:34:22 crc kubenswrapper[4713]: E0126 15:34:22.809145 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.856614 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.856712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.856739 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.856774 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.856798 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
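The NodeNotReady condition above will keep repeating until a CNI network configuration appears in the directory named in the message: the container runtime reports NetworkReady=false while /etc/kubernetes/cni/net.d/ is empty, which here follows from the ovnkube-controller container crash-looping before it can write its config. A rough Go sketch of that readiness test, assuming the path from the log and the conventional .conf/.conflist/.json CNI config suffixes (the suffix list is an assumption, not read from this log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet message; the network plugin is
        // expected to drop its CNI config file here once it starts cleanly.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            os.Exit(1)
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // conventional CNI suffixes
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            // The state reported above as NetworkReady=false.
            fmt.Printf("no CNI configuration file in %s/\n", dir)
            os.Exit(1)
        }
    }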
Has your network provider started?"} Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.960296 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.960403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.960431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.960463 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:22 crc kubenswrapper[4713]: I0126 15:34:22.960486 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:22Z","lastTransitionTime":"2026-01-26T15:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.069069 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.069178 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.069210 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.069263 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.069294 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.155916 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.156299 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.156431 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.156401595 +0000 UTC m=+40.293418870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.172876 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.172923 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.172934 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.172950 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.172963 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.178409 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/1.log" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.179691 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/0.log" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.185100 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a" exitCode=1 Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.185214 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.185323 4713 scope.go:117] "RemoveContainer" containerID="6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.186450 4713 scope.go:117] "RemoveContainer" containerID="95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a" Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.186666 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"
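The "back-off 10s restarting failed container" above is the kubelet's standard crash-loop restart backoff: an initial 10s delay that doubles after each failed restart, capped at 5 minutes and reset once the container runs cleanly for a while. The cap and doubling are standard kubelet behavior, not read from this log; a short Go sketch of the resulting delay sequence:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet crash-loop backoff: 10s initial delay, doubled after each
        // failed restart, capped at 5 minutes.
        delay := 10 * time.Second
        maxDelay := 5 * time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        // Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s.
    }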
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.235068 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.251791 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.277891 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.278568 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.278611 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.278641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.278660 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.279012 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc222c0f47c252da792a54960f6e1d60c0ee386f189de14a5f9a8fa644edd2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"message\\\":\\\" 5948 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:19.559079 5948 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:19.559157 5948 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 15:34:19.559197 5948 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:19.559220 5948 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:19.559281 5948 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:19.559304 5948 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:19.559321 5948 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:19.559295 5948 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:19.559335 5948 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:19.559348 5948 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:19.559385 5948 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:19.559399 5948 factory.go:656] Stopping watch factory\\\\nI0126 15:34:19.559420 5948 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:19.559423 5948 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.296171 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.311741 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.329267 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.342236 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.354989 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.376492 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/l
og/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.381106 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.381132 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.381141 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.381154 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.381164 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.388753 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.399290 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.412541 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.423473 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.435625 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.447766 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.456022 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:23Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.484071 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.484108 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.484117 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.484130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.484142 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.587246 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.587309 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.587327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.587352 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.587403 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.691714 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.691805 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.691831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.691867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.691892 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.795474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.795571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.795601 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.795631 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.795651 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.802961 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.803007 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.802989 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.803169 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.803285 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:23 crc kubenswrapper[4713]: E0126 15:34:23.803348 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.809632 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:54:14.556687551 +0000 UTC Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.898615 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.898665 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.898676 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.898697 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:23 crc kubenswrapper[4713]: I0126 15:34:23.898710 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:23Z","lastTransitionTime":"2026-01-26T15:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.003035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.003101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.003116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.003138 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.003157 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.113298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.113410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.113460 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.113486 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.113502 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.191557 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/1.log" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.198427 4713 scope.go:117] "RemoveContainer" containerID="95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a" Jan 26 15:34:24 crc kubenswrapper[4713]: E0126 15:34:24.198856 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.214962 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.217443 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.217594 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.217618 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.217643 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.217666 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.234617 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.252730 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.272627 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.291701 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.307924 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.321113 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.321189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.321206 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.321229 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.321265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.331471 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.350676 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.368767 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.396192 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.418897 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.424314 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.424431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.424447 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.424494 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.424510 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.439596 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.454570 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.474144 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.491921 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.507107 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.527761 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.527801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.527810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.527829 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.527847 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.536256 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.631597 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.631649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.631663 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.631683 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.631697 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.735225 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.735306 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.735323 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.735349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.735397 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.803098 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:24 crc kubenswrapper[4713]: E0126 15:34:24.803356 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.810151 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:18:03.629527559 +0000 UTC Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.839046 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.839117 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.839136 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.839165 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.839189 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.941977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.942043 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.942060 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.942082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:24 crc kubenswrapper[4713]: I0126 15:34:24.942100 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:24Z","lastTransitionTime":"2026-01-26T15:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.045447 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.045506 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.045523 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.045547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.045566 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.148649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.148704 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.148717 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.148750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.148762 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.180876 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:25 crc kubenswrapper[4713]: E0126 15:34:25.181079 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:25 crc kubenswrapper[4713]: E0126 15:34:25.181208 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:29.181170048 +0000 UTC m=+44.318187323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.251359 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.251477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.251505 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.251534 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.251553 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.354003 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.354132 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.354145 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.354162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.354176 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.457641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.457675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.457685 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.457701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.457713 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.560821 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.560908 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.560933 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.560967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.560989 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.664835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.664910 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.664928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.664953 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.664973 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.768684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.768779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.768805 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.768838 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.768862 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.802738 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.802850 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.802904 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:25 crc kubenswrapper[4713]: E0126 15:34:25.803202 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:25 crc kubenswrapper[4713]: E0126 15:34:25.803419 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:25 crc kubenswrapper[4713]: E0126 15:34:25.803646 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.810845 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:26:16.509410947 +0000 UTC Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.822974 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.849632 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.868175 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.873082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.873152 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.873166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.873186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.873198 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.893076 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.922005 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.945661 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.966849 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.976419 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.976487 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.976519 4713 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.976554 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.976581 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:25Z","lastTransitionTime":"2026-01-26T15:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:25 crc kubenswrapper[4713]: I0126 15:34:25.983883 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.003662 4713 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.022436 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.036093 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.068107 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.080908 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.082681 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.082722 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.082734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.082753 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.082769 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.099087 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.099562 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.114760 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.128337 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.142022 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.157465 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.169728 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.179627 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.184809 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.184854 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.184867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.184885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.184900 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.190641 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.205702 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.217902 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.241145 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.255538 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.270850 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.284846 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.287448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.287599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.287712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.287851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.288560 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.297574 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.311270 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.333183 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.348068 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.362881 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.377902 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.390833 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.391732 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc 
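The patch failures above all share one root cause: the serving certificate of the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26. A minimal Go sketch of the same validity-window check that crypto/x509 performs during verification (the certificate path below is hypothetical, not taken from this log):

```go
// Minimal sketch, not the webhook's actual code: reproduce the kind of
// check behind "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration only.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	switch {
	case now.After(cert.NotAfter):
		// The failure mode in the log: 2026-01-26T15:34:26Z is after
		// the certificate's NotAfter of 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```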
kubenswrapper[4713]: I0126 15:34:26.391775 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.391789 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.391808 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.391820 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.494625 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.494668 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.494679 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.494702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.494717 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.597975 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.598030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.598049 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.598075 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.598092 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.701685 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.701766 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.701779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.701798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.702044 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.802521 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:26 crc kubenswrapper[4713]: E0126 15:34:26.802692 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.804559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.804733 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.804867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.805001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.805126 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
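Every "Node became not ready" heartbeat above carries the same KubeletNotReady reason: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A minimal sketch, assuming only that readiness hinges on a network definition appearing in that directory; this is an illustration of the condition, not CRI-O's or ocicni's actual scan:

```go
// Minimal sketch: check a CNI conf dir for the file types CNI loaders
// pick up, mirroring the "no CNI configuration file" state in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The state the kubelet keeps reporting while the network
		// plugin (multus/ovn-kubernetes here) has not written its
		// config yet: sandboxes for new pods cannot be created.
		fmt.Println("no CNI configuration file in", confDir)
		return
	}
	for _, f := range found {
		fmt.Println("CNI config candidate:", f)
	}
}
```

Once the network plugin writes its config there, the runtime flips NetworkReady to true and the pods stuck in ContainerCreating (network-metrics-daemon, network-check-target, and so on) can get sandboxes.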
Has your network provider started?"} Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.811142 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:18:27.089265918 +0000 UTC Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.908475 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.908543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.908561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.908586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:26 crc kubenswrapper[4713]: I0126 15:34:26.908604 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:26Z","lastTransitionTime":"2026-01-26T15:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.012108 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.012183 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.012201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.012225 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.012242 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.116252 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.116326 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.116347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.116401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.116419 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.219578 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.219632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.219644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.219674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.219689 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.322049 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.322413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.322598 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.322779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.323034 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.426495 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.426547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.426564 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.426587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.426604 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.529436 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.529484 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.529535 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.529560 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.529581 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.632243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.632700 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.632931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.633140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.633478 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.736403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.736437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.736446 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.736460 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.736470 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.802811 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.802810 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.802943 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:27 crc kubenswrapper[4713]: E0126 15:34:27.803105 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:27 crc kubenswrapper[4713]: E0126 15:34:27.803614 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:27 crc kubenswrapper[4713]: E0126 15:34:27.803679 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.811810 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:36:03.280078964 +0000 UTC Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.839826 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.839876 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.839885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.839906 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.839918 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.942481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.942525 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.942542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.942561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:27 crc kubenswrapper[4713]: I0126 15:34:27.942584 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:27Z","lastTransitionTime":"2026-01-26T15:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
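The two certificate_manager lines above show the kubelet-serving certificate (expiring 2026-02-24) paired with rotation deadlines that are already in the past, and a different deadline on each log line. A sketch of the jittered-deadline heuristic this resembles, assuming client-go's choice of a random point at roughly 70-90% of the certificate lifetime; the one-year lifetime below is an assumption for illustration, not a value from this log:

```go
// Minimal sketch of a jittered rotation deadline, as an illustration of
// why consecutive log lines print different deadlines for the same cert.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// Pick a uniformly random point in [70%, 90%] of the lifetime.
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log entry above; NotBefore is assumed.
	expiry, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := expiry.Add(-365 * 24 * time.Hour)
	deadline := rotationDeadline(notBefore, expiry)
	fmt.Println("rotation deadline:", deadline.Format(time.RFC3339))
	// A deadline in the past (2026-01-05 and 2025-11-22 in the log,
	// against a node clock of 2026-01-26) means rotation is due now.
	fmt.Println("rotation due now:", time.Now().After(deadline))
}
```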
Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.044596 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.044626 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.044634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.044648 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.044657 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.146752 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.146794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.146804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.146821 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.146836 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.249026 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.249073 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.249085 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.249105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.249116 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.352847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.352914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.352928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.352952 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.352964 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.455189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.455216 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.455224 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.455237 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.455249 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.557928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.557971 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.557984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.558005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.558016 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.661695 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.661754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.661775 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.661804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.661821 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.765089 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.765142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.765155 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.765169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.765180 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.802899 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:28 crc kubenswrapper[4713]: E0126 15:34:28.803178 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.813031 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:24:14.00032064 +0000 UTC Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.867830 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.867896 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.867916 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.867942 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.867962 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.970286 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.970330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.970344 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.970381 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:28 crc kubenswrapper[4713]: I0126 15:34:28.970394 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:28Z","lastTransitionTime":"2026-01-26T15:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.073191 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.073223 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.073233 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.073248 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.073257 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.176009 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.176079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.176096 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.176123 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.176143 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.223023 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:29 crc kubenswrapper[4713]: E0126 15:34:29.223256 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:29 crc kubenswrapper[4713]: E0126 15:34:29.223335 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:37.223309494 +0000 UTC m=+52.360326759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.278344 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.278417 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.278430 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.278452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.278464 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.381861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.381930 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.381943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.381962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.381976 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.484941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.484991 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.485001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.485017 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.485027 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.587964 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.588035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.588052 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.588108 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.588125 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.691597 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.691980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.692166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.692321 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.692496 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.796269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.796334 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.796349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.796392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.796408 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.802630 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.802748 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.802656 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:29 crc kubenswrapper[4713]: E0126 15:34:29.802849 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:29 crc kubenswrapper[4713]: E0126 15:34:29.803233 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:29 crc kubenswrapper[4713]: E0126 15:34:29.803305 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.813840 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:32:28.656465483 +0000 UTC Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.900442 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.900500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.900513 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.900534 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4713]: I0126 15:34:29.900549 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.004025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.004098 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.004117 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.004149 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.004168 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.107528 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.107617 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.107651 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.107676 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.107697 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.209980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.210058 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.210073 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.210093 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.210106 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.312905 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.312958 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.312969 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.312986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.312997 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.416847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.417401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.417577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.417774 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.417933 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.521224 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.521271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.521283 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.521302 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.521317 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.624907 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.625025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.625050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.625081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.625103 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.727930 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.727978 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.727991 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.728013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.728028 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.803047 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:30 crc kubenswrapper[4713]: E0126 15:34:30.803261 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.830445 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:30:54.254597791 +0000 UTC Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.833289 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.833345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.833388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.833412 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.833429 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.936121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.936260 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.936292 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.936317 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4713]: I0126 15:34:30.936337 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.039355 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.039447 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.039464 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.039488 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.039509 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.142552 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.142605 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.142620 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.142642 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.142656 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.244937 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.244986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.244997 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.245021 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.245065 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.347670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.347742 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.347756 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.347781 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.347797 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.450943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.451014 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.451039 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.451071 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.451097 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.553918 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.553983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.554000 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.554024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.554045 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.657133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.657213 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.657240 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.657271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.657295 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.759868 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.759906 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.759918 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.759933 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.759945 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.803031 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.803130 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.803227 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:31 crc kubenswrapper[4713]: E0126 15:34:31.805173 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:31 crc kubenswrapper[4713]: E0126 15:34:31.805302 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:31 crc kubenswrapper[4713]: E0126 15:34:31.805019 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.831490 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:45:19.048132101 +0000 UTC Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.862785 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.862838 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.862849 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.862867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.862884 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.965725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.965778 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.965793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.965814 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:31 crc kubenswrapper[4713]: I0126 15:34:31.965829 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.068596 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.068650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.068668 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.068689 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.068700 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.171381 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.171435 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.171449 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.171469 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.171484 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.274195 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.274250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.274264 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.274285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.274299 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.377649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.377724 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.377742 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.377772 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.377791 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.480817 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.480864 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.480886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.480910 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.480928 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.584599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.584703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.584731 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.584764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.584788 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.687886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.687957 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.687982 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.688013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.688037 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.791474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.791551 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.791560 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.791576 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.791586 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.802917 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.803171 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.806444 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.806496 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.806515 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.806535 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.806551 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.824618 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.829675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.829751 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.829761 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.829779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.829790 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.831768 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:12:10.645724977 +0000 UTC Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.845506 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.850504 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.850562 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
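[Editor's sketch, not part of the journal: the recurring NetworkPluginNotReady condition above reduces to one check — the kubelet finds no CNI config under /etc/kubernetes/cni/net.d/. The Go sketch below is a minimal, hypothetical reproduction of that check; the .conf/.conflist/.json extension list is an assumption based on the usual libcni convention, and the path is taken from the log message itself.]

package main

import (
	"fmt"
	"path/filepath"
)

// Lists candidate CNI config files under the directory the kubelet reports.
// An empty result is exactly the state that produces the repeated
// "NetworkReady=false reason:NetworkPluginNotReady" condition above.
func main() {
	found := 0
	// Extension set assumed from common libcni behavior, not from this log.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join("/etc/kubernetes/cni/net.d", pat))
		if err != nil {
			fmt.Println("glob error:", err)
			continue
		}
		for _, m := range matches {
			fmt.Println("CNI config present:", m)
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/ (matches the kubelet condition)")
	}
}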
event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.850574 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.850590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.850603 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.864878 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.869265 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.869347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
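[Editor's sketch, not part of the journal: every status-patch retry above fails on the same x509 error — the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. A minimal Go sketch, assuming shell access to the node, that reads the served certificate's validity window the same way the failing TLS handshake does:]

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify: the goal is to inspect the dates of an
	// already-untrusted certificate, not to authenticate the endpoint.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	fmt.Println("NotBefore:", leaf.NotBefore)
	fmt.Println("NotAfter: ", leaf.NotAfter)
	if time.Now().After(leaf.NotAfter) {
		// Matches the journal: "certificate has expired or is not yet valid".
		fmt.Println("certificate has expired")
	}
}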
event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.869407 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.869440 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.869464 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.899403 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.909398 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.909477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.909492 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.909510 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.909526 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.927849 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4713]: E0126 15:34:32.928072 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.929806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.929851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.929880 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.929899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4713]: I0126 15:34:32.929909 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.032673 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.032745 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.032768 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.032803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.032826 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.136254 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.136349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.136424 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.136461 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.136488 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.239534 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.239606 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.239630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.239663 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.239690 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.343161 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.343240 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.343258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.343285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.343304 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.446688 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.446814 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.446834 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.446860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.446879 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.549842 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.549923 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.549942 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.549971 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.550017 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.652490 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.652539 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.652552 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.652571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.652585 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.757437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.757491 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.757504 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.757533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.757547 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.803249 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.803345 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.803408 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:33 crc kubenswrapper[4713]: E0126 15:34:33.803458 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:33 crc kubenswrapper[4713]: E0126 15:34:33.803634 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:33 crc kubenswrapper[4713]: E0126 15:34:33.803735 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.832790 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:06:58.527255658 +0000 UTC Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.860775 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.860828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.860840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.860861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.860875 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.968620 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.968675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.968692 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.968711 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4713]: I0126 15:34:33.968723 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.071601 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.071670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.071693 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.071723 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.071747 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.174215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.174258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.174269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.174305 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.174323 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.276635 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.276928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.277006 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.277074 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.277140 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.380120 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.380175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.380185 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.380205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.380215 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.525230 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.525686 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.525842 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.525979 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.526107 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.533995 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.548610 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.562575 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.579920 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.595842 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.612847 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.629874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.629925 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.629944 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.629972 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.630000 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.629894 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.655521 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.674214 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.690744 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.710220 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.724759 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.733208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.733259 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.733273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.733294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.733308 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.743323 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.759707 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.774631 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.790631 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.803133 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:34 crc kubenswrapper[4713]: E0126 15:34:34.803322 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.812137 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.827133 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.833584 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:11:36.765714749 +0000 UTC Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.835645 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.835742 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.835762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.835793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.835814 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.855832 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.939270 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.939703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.939866 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.940007 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4713]: I0126 15:34:34.940156 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.043446 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.043758 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.043901 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.044052 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.044179 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.148077 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.148600 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.148839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.149060 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.149243 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.252487 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.252553 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.252575 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.252603 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.252625 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.356314 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.356777 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.356890 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.356991 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.357069 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.460354 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.460849 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.460962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.461080 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.461209 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.566064 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.566162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.566190 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.566221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.566242 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.670015 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.670071 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.670088 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.670114 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.670130 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.773048 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.773294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.773314 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.773336 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.773353 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.802787 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:35 crc kubenswrapper[4713]: E0126 15:34:35.802924 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.803642 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:35 crc kubenswrapper[4713]: E0126 15:34:35.803729 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.803865 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:35 crc kubenswrapper[4713]: E0126 15:34:35.804138 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.820076 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.834768 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:34:43.477126005 +0000 UTC Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.837230 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c
2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.866417 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f03291
2af083879e04e34717d6331a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.876313 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.876354 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.876384 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.876401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.876413 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.889135 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.910619 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.927836 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.943729 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.957409 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.971102 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.978582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.978867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.979027 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.979201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.979390 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4713]: I0126 15:34:35.987305 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:35.999968 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.014974 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.030072 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.044413 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.070657 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.081512 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.081739 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.081833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.081913 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.081996 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.088486 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.103178 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.114617 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.185553 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.185629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.185646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.185665 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.185697 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.288207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.288262 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.288270 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.288284 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.288293 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.391718 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.391783 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.391797 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.391824 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.391840 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.495330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.495817 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.495926 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.496026 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.496104 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.599727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.600149 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.600417 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.600639 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.600856 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.704728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.705307 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.705342 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.705402 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.705425 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.803090 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:36 crc kubenswrapper[4713]: E0126 15:34:36.803248 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.808984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.809344 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.809522 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.809786 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.809905 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.835489 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:42:43.672049943 +0000 UTC Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.912762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.912840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.912860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.912892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4713]: I0126 15:34:36.912910 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
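[annotation] The NodeNotReady condition repeated throughout these entries comes down to one fact: the runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/. The actual readiness probe lives in the container runtime, but the shape of the check is just a directory scan. A sketch under that assumption; the accepted extension set (.conf, .conflist, .json) is the conventional one for CNI config loaders and is an assumption here, not taken from this log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI
// network configuration file. Illustrative stand-in for the runtime's
// check, not its actual code.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // false once the network operator has not yet written a config
}
```

Until the network provider (here OVN-Kubernetes) writes its config into that directory, every node status sync re-reports NetworkReady=false, which is why the same condition appears on each heartbeat below.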
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.015759 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.015816 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.015828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.015847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.015864 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.118712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.118776 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.118791 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.118819 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.118838 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.222127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.222186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.222198 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.222217 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.222230 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.226618 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.226911 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.227045 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:34:53.227011549 +0000 UTC m=+68.364028944 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.325674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.325736 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.325750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.325772 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.325789 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
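[annotation] The metrics-certs mount above is parked with "No retries permitted until ... (durationBeforeRetry 16s)", and later entries for other volumes show 32s. That doubling pattern is the signature of the exponential backoff kubelet applies per volume operation (nestedpendingoperations). A minimal sketch of the policy; the initial delay and cap are assumed values, not read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// backoff doubles the retry delay up to a cap, matching the 16s and
// 32s durationBeforeRetry values visible in these entries.
type backoff struct {
	delay, max time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute} // assumed bounds
	for i := 0; i < 8; i++ {
		fmt.Println(b.next()) // 500ms 1s 2s 4s 8s 16s 32s 1m4s
	}
}
```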
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.428794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.429238 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.429345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.429486 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.429629 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.532884 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.532935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.532944 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.532962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.532971 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.635618 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.635661 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.635679 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.635699 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.635714 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.733197 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.733260 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.733403 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.733468 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:09.733452779 +0000 UTC m=+84.870470014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.733476 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.733618 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:09.733579373 +0000 UTC m=+84.870596648 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.738269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.738324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.738335 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.738357 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.738392 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.803539 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.803589 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.803638 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.804219 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.804301 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.804517 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.860866 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:12:58.517333337 +0000 UTC Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.861014 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:35:09.860990198 +0000 UTC m=+84.998007433 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.860925 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.861780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.861857 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.861967 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.861988 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862004 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862051 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:09.862039268 +0000 UTC m=+84.999056503 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862061 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862085 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862102 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:37 crc kubenswrapper[4713]: E0126 15:34:37.862155 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:09.862136621 +0000 UTC m=+84.999153896 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.862923 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.862952 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.862961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.862973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.862985 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
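[annotation] Note how the projected-volume failures above surface as a single bracketed list: "[object ... "kube-root-ca.crt" not registered, object ... "openshift-service-ca.crt" not registered]". Each projected source (two ConfigMaps here) fails independently and the errors are aggregated into one message. A small sketch of that aggregation format; names are copied from the log, the helper itself is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// describe joins per-source errors into the bracketed, comma-separated
// form seen in the kubelet log above.
func describe(errs []error) string {
	parts := make([]string, len(errs))
	for i, e := range errs {
		parts[i] = e.Error()
	}
	return "[" + strings.Join(parts, ", ") + "]"
}

func main() {
	errs := []error{
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "kube-root-ca.crt"),
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "openshift-service-ca.crt"),
	}
	fmt.Println(describe(errs))
}
```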
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.965549 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.965590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.965602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.965617 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4713]: I0126 15:34:37.965627 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.069241 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.069292 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.069302 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.069323 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.069336 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.171872 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.171918 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.171930 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.171949 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.171965 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
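[annotation] The UnmountVolume.TearDown failure a few entries back ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") is a registry-lookup miss: after a kubelet restart, CSI drivers must re-register over the plugin socket before volumes they own can be torn down. A minimal sketch of that lookup; the types and field names are illustrative, not kubelet's actual ones:

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry models the kubelet-side list of registered CSI
// drivers; get fails with the same wording as the log entry when a
// driver has not (yet) re-registered.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> plugin endpoint
}

func (r *driverRegistry) get(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	r := &driverRegistry{drivers: map[string]string{}} // nothing registered yet
	_, err := r.get("kubevirt.io.hostpath-provisioner")
	fmt.Println(err)
}
```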
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.275435 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.275773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.275839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.275911 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.275979 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.378728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.378774 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.378784 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.378801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.378839 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.482489 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.482539 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.482551 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.482571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.482585 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.584833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.584912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.584925 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.584949 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.584963 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.688886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.688957 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.688975 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.689001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.689020 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.792281 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.792354 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.792377 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.792398 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.792416 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.802562 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:38 crc kubenswrapper[4713]: E0126 15:34:38.802774 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.803638 4713 scope.go:117] "RemoveContainer" containerID="95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.862513 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:33:46.663103589 +0000 UTC Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.900656 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.900744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.900757 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.900782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4713]: I0126 15:34:38.900793 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.004912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.004980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.004994 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.005020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.005039 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
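[annotation] The kubelet-serving certificate lines are worth a second look: the expiration stays fixed at 2026-02-24 05:53:03, but the rotation deadline jumps on every sync (2025-12-10, 2026-01-04, 2026-01-17, ...). That is expected: the certificate manager picks a jittered point late in the certificate's validity window rather than a fixed instant. A sketch of that computation; the 70-90% band and the assumed issue time are illustrative assumptions, as the exact jitter fraction is an implementation detail of client-go's certificate manager:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random point in (an assumed) 70-90% of the
// certificate lifetime, which reproduces the moving deadlines above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore, _ := time.Parse(time.RFC3339, "2025-02-24T05:53:03Z") // assumed issue time
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter)) // a different deadline each call
	}
}
```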
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.107713 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.107757 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.107768 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.107786 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.107798 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.210505 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.210582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.210602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.210632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.210652 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.313192 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.313226 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.313235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.313250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.313261 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.415807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.415870 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.415888 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.415914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.415930 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.519437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.519477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.519488 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.519505 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.519514 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.622249 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.622299 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.622308 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.622326 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.622337 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.727141 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.727552 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.727566 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.727586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.727598 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.803489 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.803558 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.803658 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:39 crc kubenswrapper[4713]: E0126 15:34:39.803657 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:39 crc kubenswrapper[4713]: E0126 15:34:39.803853 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:39 crc kubenswrapper[4713]: E0126 15:34:39.803993 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.830649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.830703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.830715 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.830734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.830748 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.863111 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:53:12.917380803 +0000 UTC Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.933736 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.933806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.933821 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.933845 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4713]: I0126 15:34:39.933865 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.036543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.036604 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.036616 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.036634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.036649 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.139935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.139995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.140009 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.140037 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.140054 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.243039 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.243101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.243120 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.243147 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.243161 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.259905 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/1.log" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.264780 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.265404 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.289190 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.306260 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.323515 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.345787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.345848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.345869 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.345896 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.345912 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.348883 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:
34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.364734 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.386122 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.408665 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
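One detail worth pulling out of the kube-apiserver-crc payload above: the kube-apiserver-check-endpoints container terminated once with exit code 255 (its captured log tail ends in F0126 ... pods "kube-apiserver-crc" not found) before restarting cleanly at 15:34:08, which is why it reports restartCount 1. Crash details like this are easy to miss inside the escaped JSON; a small sketch that, given a pod status dict of the shape embedded in these patches, lists containers whose last termination was abnormal:

    # List containers whose lastState records a non-zero exit code,
    # given a pod "status" dict shaped like the patches in this log.
    from typing import Any

    def abnormal_terminations(status: dict[str, Any]) -> list[tuple[str, int, str]]:
        out = []
        for key in ("initContainerStatuses", "containerStatuses"):
            for cs in status.get(key, []):
                term = cs.get("lastState", {}).get("terminated")
                if term and term.get("exitCode", 0) != 0:
                    out.append((cs["name"], term["exitCode"], term.get("reason", "")))
        return out

    # Shape copied from the kube-apiserver-check-endpoints entry above:
    status = {"containerStatuses": [{
        "name": "kube-apiserver-check-endpoints",
        "lastState": {"terminated": {"exitCode": 255, "reason": "Error"}},
    }]}
    print(abnormal_terminations(status))  # [('kube-apiserver-check-endpoints', 255, 'Error')]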
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.422110 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.437213 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.448929 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.448972 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.448986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.449003 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.449016 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.449708 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.466273 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
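The pods that are actually stuck (network-metrics-daemon-4vgps and networking-console-plugin-85b44fc459-gdk6g, both waiting in ContainerCreating above) are the ones that need a pod-network sandbox, which cannot be built until the CNI config appears; the Running pods in this section are all host-network, as their podIP equals the host IP 192.168.126.11 in the patches above. A small tally over status dicts of the shape in these entries makes that split visible at a glance:

    # Tally container "waiting" reasons across pod status dicts; while
    # the CNI config is missing, pod-network pods sit in ContainerCreating.
    from collections import Counter
    from typing import Any, Iterable

    def waiting_reasons(statuses: Iterable[dict[str, Any]]) -> Counter:
        c: Counter = Counter()
        for st in statuses:
            for cs in st.get("containerStatuses", []):
                w = cs.get("state", {}).get("waiting")
                if w:
                    c[w.get("reason", "unknown")] += 1
        return c

    # Shapes taken from the two stuck pods in this log:
    sample = [
        {"containerStatuses": [
            {"state": {"waiting": {"reason": "ContainerCreating"}}},
            {"state": {"waiting": {"reason": "ContainerCreating"}}},
        ]},
        {"containerStatuses": [
            {"state": {"waiting": {"reason": "ContainerCreating"}}},
        ]},
    ]
    print(waiting_reasons(sample))  # Counter({'ContainerCreating': 3})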
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.497037 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.516666 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.531296 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.547342 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.551471 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.551526 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.551550 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.551573 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.551592 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.568147 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.582007 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.599392 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.654131 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.654184 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.654197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.654215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.654227 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.757139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.757197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.757215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.757237 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.757250 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.803300 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:40 crc kubenswrapper[4713]: E0126 15:34:40.803533 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.860946 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.861011 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.861033 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.861062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.861080 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.863301 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:09:39.585296267 +0000 UTC Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.964810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.964844 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.964854 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.964870 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4713]: I0126 15:34:40.964879 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.068383 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.068727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.068823 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.068900 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.068969 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.171750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.171796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.171805 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.171819 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.171829 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.273586 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/2.log" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.274244 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.274298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.274316 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.274345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.274403 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.275359 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/1.log" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.281558 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" exitCode=1 Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.281617 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.281695 4713 scope.go:117] "RemoveContainer" containerID="95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.282839 4713 scope.go:117] "RemoveContainer" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" Jan 26 15:34:41 crc kubenswrapper[4713]: E0126 15:34:41.283157 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.304605 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.321267 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.340488 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.360869 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380282 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380326 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380340 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380385 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380402 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.380891 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.397588 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.409403 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.432180 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.446960 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.462967 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.476986 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0
c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.482712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.482749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.482758 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.482772 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.482781 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.498102 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.513640 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.526192 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.543604 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.558023 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.571103 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.585856 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.585919 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.585931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.585947 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.585958 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.593702 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d7dba0efb8e0646b48849747adfcd732f032912af083879e04e34717d6331a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"message\\\":\\\" 6155 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 15:34:22.532844 6155 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533176 6155 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533684 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.533936 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 15:34:22.533959 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 15:34:22.533986 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:22.533998 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:22.534024 6155 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534125 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534338 6155 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:22.534831 6155 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.689450 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.689874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.690046 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.690155 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.690308 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.793617 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.793680 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.793702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.793734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.793756 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.802816 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.802885 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:41 crc kubenswrapper[4713]: E0126 15:34:41.802983 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.803023 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:41 crc kubenswrapper[4713]: E0126 15:34:41.803173 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:41 crc kubenswrapper[4713]: E0126 15:34:41.803276 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.864246 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:04:29.686408324 +0000 UTC Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.897704 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.897754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.897767 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.897792 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4713]: I0126 15:34:41.897808 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.000898 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.000931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.000941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.000954 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.000964 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.105410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.105469 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.105482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.105501 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.105513 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.209322 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.209391 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.209403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.209421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.209433 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.287556 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/2.log" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.291353 4713 scope.go:117] "RemoveContainer" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" Jan 26 15:34:42 crc kubenswrapper[4713]: E0126 15:34:42.291548 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312248 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312795 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312841 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312857 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.312870 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.344259 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.361448 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.379308 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.394139 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.412220 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.415512 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.415558 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.415572 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.415592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.415604 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.426214 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.440300 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.458087 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.473866 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.489985 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.510068 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.518702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.518750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc 
kubenswrapper[4713]: I0126 15:34:42.518763 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.518779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.518790 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.524300 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.545324 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.558045 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.570875 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.580817 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.595592 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.621493 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.621542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.621552 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.621571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.621583 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.723966 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.724023 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.724032 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.724050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.724062 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.802875 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:42 crc kubenswrapper[4713]: E0126 15:34:42.803119 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.828035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.828090 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.828110 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.828133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.828153 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.864948 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:15:01.142546048 +0000 UTC Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.931257 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.931323 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.931341 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.931809 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.931861 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.978441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.978548 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.978604 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.978630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4713]: I0126 15:34:42.978653 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
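
The certificate_manager line above shows the opposite failure mode pending for the kubelet's own serving certificate: it remains valid until 2026-02-24, but the rotation deadline of 2025-11-19 already lies in the past, so rotation is due immediately on the next sync. In client-go's certificate manager the deadline is drawn at a jittered point through the validity window; a sketch under that assumption (the exact jitter fraction, roughly 70-90%, and the one-year validity start are assumptions here, since NotBefore is not in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline approximates how client-go's certificate manager picks a
// rotation time: a random point roughly 70-90% of the way through the
// certificate's validity window (assumed fraction, for illustration only).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(total)))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year validity
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline.Format(time.RFC3339))
	// If the deadline is already past (as 2025-11-19 is on 2026-01-26),
	// rotation is attempted on the next sync.
	fmt.Println("rotation due now: ", time.Now().After(deadline))
}
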
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.000807 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
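
The NetworkReady=false condition that repeats through these entries comes from a concrete, checkable fact: the container runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/. A simplified Go sketch of that directory scan (the real check lives in the runtime via libcni; the extension list mirrors what libcni loads and is otherwise an assumption):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the NetworkReady message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		// libcni considers .conf, .conflist and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// Matches the log: "no CNI configuration file in ... net.d/".
		fmt.Println("no CNI configuration file found; NetworkReady stays false")
		return
	}
	fmt.Println("CNI configs:", confs)
}

Here the directory stays empty until the OVN-Kubernetes network provider writes its config, which is why the kubelet keeps asking "Has your network provider started?".
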
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.006904 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.006951 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.006967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.006989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.007009 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.028145 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.033890 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.034030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.034061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.034105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.034144 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.058498 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.064209 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.064313 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.064332 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.064392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.064415 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.087015 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.092273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.092477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.092611 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.092702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.092781 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.113277 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.113634 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.115937 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.116310 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.116450 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.116549 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.116823 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.220716 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.220809 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.220831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.220863 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.220884 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.324430 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.324495 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.324513 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.324540 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.324557 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.427789 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.427851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.427874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.427905 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.427931 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.530535 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.530586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.530596 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.530612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.530623 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.632989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.633351 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.633461 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.633580 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.633673 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.736877 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.737223 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.737350 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.737535 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.737691 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.803625 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.803774 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.803818 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.803977 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.803673 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:43 crc kubenswrapper[4713]: E0126 15:34:43.804514 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.841103 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.841580 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.841803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.842005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.842223 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.865473 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:31:32.190222114 +0000 UTC Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.945673 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.945739 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.945762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.945793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4713]: I0126 15:34:43.945820 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.049012 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.049311 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.049452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.049611 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.049721 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.152701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.153168 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.153322 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.153508 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.153634 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.256977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.257019 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.257033 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.257048 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.257059 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.359590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.359653 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.359670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.359695 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.359712 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.462448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.462518 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.462538 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.462563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.462580 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.566041 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.566104 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.566115 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.566133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.566146 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.669545 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.669602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.669620 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.669644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.669661 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.773169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.773235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.773253 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.773279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.773299 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.802655 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:44 crc kubenswrapper[4713]: E0126 15:34:44.802834 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.867175 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:53:36.344980395 +0000 UTC Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.876551 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.876623 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.876641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.876670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.876689 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.979970 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.980048 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.980066 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.980092 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4713]: I0126 15:34:44.980109 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.084544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.085060 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.085262 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.085502 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.085624 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.189137 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.189658 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.189882 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.190165 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.190397 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.293467 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.293544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.293561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.293587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.293604 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.398874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.398927 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.398942 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.398963 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.398982 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.502567 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.502610 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.502623 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.502641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.502655 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.612060 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.612166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.612176 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.612193 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.612204 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.715646 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.715688 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.715703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.715726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.715743 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.802480 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.802628 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:34:45 crc kubenswrapper[4713]: E0126 15:34:45.802787 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.802819 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:45 crc kubenswrapper[4713]: E0126 15:34:45.802908 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:34:45 crc kubenswrapper[4713]: E0126 15:34:45.803035 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.818652 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.818720 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.818730 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.818753 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.818765 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.823900 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.840496 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.861086 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.868279 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:16:11.764069913 +0000 UTC
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.880231 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.894501 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.912797 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.921127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.921184 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.921196 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.921213 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.921223 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.927952 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.940546 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.954438 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.968289 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:45 crc kubenswrapper[4713]: I0126 15:34:45.979385 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.000689 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f
1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:45Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.016290 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.023444 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.023514 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.023527 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.023547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.023559 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.030586 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.048691 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.064678 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.077846 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.099670 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:46Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.126116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.126202 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.126221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.126249 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.126265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.229441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.230324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.230461 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.230569 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.230663 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.333413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.333482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.333500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.333527 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.333547 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.436462 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.436500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.436511 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.436528 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.436539 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.540803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.540869 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.540893 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.540928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.540953 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.643961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.644002 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.644011 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.644024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.644033 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.747273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.747413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.747431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.747480 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.747500 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.803279 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:46 crc kubenswrapper[4713]: E0126 15:34:46.803628 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.851744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.851835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.851856 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.851883 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.851902 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.868660 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:47:54.154377553 +0000 UTC Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.955023 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.955077 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.955087 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.955105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4713]: I0126 15:34:46.955116 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.058004 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.058057 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.058071 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.058089 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.058102 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.162039 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.162114 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.162133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.162162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.162179 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.264912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.264978 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.264998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.265025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.265045 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.368300 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.368355 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.368404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.368425 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.368440 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.472926 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.472969 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.472980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.472996 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.473008 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.576564 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.576633 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.576656 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.576684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.576707 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.679955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.680005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.680017 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.680036 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.680049 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.782425 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.782500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.782536 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.782567 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.782589 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.803205 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.803612 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:47 crc kubenswrapper[4713]: E0126 15:34:47.803727 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.803810 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:47 crc kubenswrapper[4713]: E0126 15:34:47.803966 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:47 crc kubenswrapper[4713]: E0126 15:34:47.804024 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.869815 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:48:28.349681303 +0000 UTC Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.886746 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.886820 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.886845 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.886874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.886895 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.989872 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.989944 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.989968 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.989998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:47 crc kubenswrapper[4713]: I0126 15:34:47.990023 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.092779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.092840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.092854 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.092873 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.092888 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.197481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.197594 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.197614 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.197639 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.197659 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.302806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.302863 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.302880 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.302904 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.302923 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.405426 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.405482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.405494 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.405513 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.405527 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.508474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.508531 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.508546 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.508571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.508586 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.612101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.612147 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.612156 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.612172 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.612183 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.716020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.716090 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.716110 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.716137 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.716155 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.803217 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:34:48 crc kubenswrapper[4713]: E0126 15:34:48.803473 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
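The NodeNotReady spam above has a single root cause, repeated in every message: the runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/ until the network operator writes one. A rough standalone approximation of that existence check (an illustration under the assumption that a *.conf, *.conflist, or *.json file counts as a network config, which is what libcni accepts; this is not the kubelet's actual code path):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady messages above.
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	// libcni treats *.conf, *.conflist and *.json files as candidate configs.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no CNI configuration file in %s - network not ready\n", dir)
		return
	}
	for _, f := range found {
		fmt.Println("CNI config:", f)
	}
}
```

Once the network provider drops a config file into that directory, the same check succeeds and the Ready condition flips; until then the kubelet keeps re-recording the events below.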
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.819181 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.819267 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.819280 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.819327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.819340 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.870460 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:43:37.088089816 +0000 UTC Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.922664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.922728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.922747 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.922773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4713]: I0126 15:34:48.922790 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.026636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.026704 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.026722 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.026747 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.026764 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.129203 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.129259 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.129273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.129294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.129307 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.231917 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.231981 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.231998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.232025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.232047 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.334013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.334438 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.334586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.334819 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.334984 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.438518 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.439002 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.439157 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.439321 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.439488 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.542701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.542770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.542790 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.542815 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.542835 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.645796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.645893 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.645916 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.645944 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.645963 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.749751 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.749839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.749896 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.749953 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.749970 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.807640 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:49 crc kubenswrapper[4713]: E0126 15:34:49.807825 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.808116 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:49 crc kubenswrapper[4713]: E0126 15:34:49.808218 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.808443 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:49 crc kubenswrapper[4713]: E0126 15:34:49.808535 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.852656 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.852700 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.852710 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.852727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.852738 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.871128 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:34:07.479127468 +0000 UTC Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.956233 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.956592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.956668 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.956749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4713]: I0126 15:34:49.956821 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.059885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.059958 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.059977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.060004 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.060022 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
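The certificate_manager.go:356 entries above are worth a closer look: the expiration is fixed at 2026-02-24 05:53:03, yet the rotation deadline changes on every pass (2026-01-05, then 2026-01-15, then 2025-12-10). That pattern is consistent with client-go's certificate manager, which picks each deadline at a jittered fraction of the certificate's lifetime, roughly 70-85% in the upstream code, and recomputes it on every loop; deadlines already in the past would trigger an immediate rotation attempt, so a fresh jittered value gets logged each second. A toy sketch of that computation (the jitter window is an assumption from memory of client-go, and notBefore below is hypothetical since the log prints only the expiration):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline sketches the shape of client-go's deadline logic:
// rotate at a jittered point roughly 70-85% through the cert's lifetime.
// This mimics the idea, not the upstream implementation.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jittered := time.Duration(total * 0.7 * (1 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter is taken from the log; notBefore is a hypothetical issue time.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-90 * 24 * time.Hour)

	// Each call yields a different deadline, as in the log.
	for i := 0; i < 3; i++ {
		fmt.Println("candidate rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}
```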
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.163549 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.163636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.163686 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.163712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.163763 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.267482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.267542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.267561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.267580 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.267592 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.371048 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.371094 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.371108 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.371127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.371139 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.475579 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.475664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.475683 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.476218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.476277 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.580088 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.580139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.580153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.580177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.580190 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.682594 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.682664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.682677 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.682726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.682738 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.785325 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.785386 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.785396 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.785415 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.785424 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.802582 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:34:50 crc kubenswrapper[4713]: E0126 15:34:50.803177 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.872142 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:29:07.718813359 +0000 UTC
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.888587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.888889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.889076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.889271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.889609 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.992819 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.992860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.992871 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.992893 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:50 crc kubenswrapper[4713]: I0126 15:34:50.992904 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.096529 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.096590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.096607 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.096629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.096645 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.199496 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.199879 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.200072 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.200205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.200382 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.303540 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.303631 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.303665 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.303714 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.303746 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.406540 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.406594 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.406612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.406635 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.406655 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.509179 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.509232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.509246 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.509268 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.509283 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.615115 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.615183 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.615203 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.615232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.615252 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.718171 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.718258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.718276 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.718307 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.718327 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.803655 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.803774 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:34:51 crc kubenswrapper[4713]: E0126 15:34:51.803883 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.804006 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:51 crc kubenswrapper[4713]: E0126 15:34:51.804246 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:34:51 crc kubenswrapper[4713]: E0126 15:34:51.804459 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.820219 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.820262 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.820275 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.820297 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.820311 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.872727 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:59:28.33900505 +0000 UTC
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.923006 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.923076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.923094 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.923116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:51 crc kubenswrapper[4713]: I0126 15:34:51.923132 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.026600 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.026650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.026661 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.026678 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.026689 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.129836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.129886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.129899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.129913 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.129923 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.231734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.231782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.231795 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.231814 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.231827 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.333891 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.333933 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.333943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.333960 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.333974 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.436474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.436541 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.436559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.436587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.436605 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.540058 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.540130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.540156 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.540190 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.540216 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.643059 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.643122 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.643142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.643169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.643183 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.746388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.746431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.746443 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.746464 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.746478 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.803182 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:34:52 crc kubenswrapper[4713]: E0126 15:34:52.803565 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.816053 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.848868 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.848941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.848961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.848991 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.849011 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.873028 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:46:54.967568095 +0000 UTC Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.951749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.951804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.951818 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.951839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4713]: I0126 15:34:52.951853 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.054151 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.054186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.054196 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.054212 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.054222 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.157729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.157801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.157818 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.157841 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.157859 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.249780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.250082 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.250225 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:35:25.250198064 +0000 UTC m=+100.387215499 (durationBeforeRetry 32s). 
[... five-entry not-ready block repeats at 15:34:53.260 and 15:34:53.363 ...]
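The two certificate_manager.go entries above report the same expiration (2026-02-24 05:53:03 UTC) but different rotation deadlines (2025-11-09 vs 2025-12-27). That is expected behavior: the kubelet's certificate manager re-derives the deadline with random jitter each time it is evaluated, picking a point part-way through the certificate's lifetime (roughly the 70–90% window in upstream client-go). A minimal sketch of that computation; the jitter bounds and the notBefore date below are assumptions for illustration, not values from this log:

```go
// rotation.go — sketch of a jittered certificate rotation deadline,
// showing why the same certificate can log different deadlines.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point at 70–90% of the certificate
// lifetime (assumed bounds, mirroring upstream client-go's behavior).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; the issue date is an assumption.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		// Each evaluation lands somewhere else in the window.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```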
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.387814 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.387867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.387878 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.387901 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.387913 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.407326 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 
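The patch attempt above (and every retry after it) dies on the same root cause: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, about five months before the node's clock (2026-01-26). A small sketch for confirming that from the node, using only the endpoint given in the log line; chain verification is skipped solely so the validity window of the expired certificate can be read:

```go
// checkcert.go — print the validity window of the webhook's serving cert.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only; a valid chain is exactly what's missing
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter) // per the log: 2025-08-24T17:21:41Z
}
```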
2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.412981 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.413034 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.413049 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.413070 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.413085 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.433220 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.437211 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.437255 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.437271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.437286 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.437297 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.457168 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.465142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.465207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.465220 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.465238 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.465256 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.479341 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.482392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.482422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.482433 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.482452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.482466 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.498278 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.498456 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.499832 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.499886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.499899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.499910 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.499922 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.602891 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.602939 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.602952 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.602968 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.602982 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.705538 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.705655 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.705669 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.705690 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.705705 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.802986 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.803098 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.803161 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.803219 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.803415 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.803459 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.804250 4713 scope.go:117] "RemoveContainer" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" Jan 26 15:34:53 crc kubenswrapper[4713]: E0126 15:34:53.804462 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.807278 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.807302 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.807310 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.807323 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.807334 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.873933 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:59:13.113607489 +0000 UTC Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.910380 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.910444 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.910458 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.910484 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4713]: I0126 15:34:53.910498 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.013973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.014042 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.014062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.014090 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.014105 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.117091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.117134 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.117146 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.117166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.117179 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.219982 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.220023 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.221067 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.221132 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.221149 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.324749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.324839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.324860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.324889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.324909 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.427706 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.427764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.427778 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.427796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.427812 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.530327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.530410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.530426 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.530452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.530469 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.633644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.633708 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.633725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.633754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.633776 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.736698 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.736739 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.736748 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.736764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.736774 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.803222 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:54 crc kubenswrapper[4713]: E0126 15:34:54.803423 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.839929 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.839983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.839995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.840016 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.840029 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.874574 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:56:23.603569378 +0000 UTC Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.943706 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.943790 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.943815 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.943842 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4713]: I0126 15:34:54.943859 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.046109 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.046159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.046171 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.046188 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.046200 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.149341 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.149415 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.149427 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.149448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.149464 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.252579 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.252630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.252642 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.252658 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.252671 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.355137 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.355214 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.355227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.355247 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.355261 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.457282 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.457334 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.457347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.457392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.457406 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.560577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.560649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.560673 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.560740 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.560763 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.662864 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.662902 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.662913 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.662928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.662942 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.765798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.765837 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.765847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.765861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.765876 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.802784 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.802793 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:55 crc kubenswrapper[4713]: E0126 15:34:55.802922 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.803087 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:55 crc kubenswrapper[4713]: E0126 15:34:55.803225 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:55 crc kubenswrapper[4713]: E0126 15:34:55.803337 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.819053 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.830201 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.842277 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.859140 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.868266 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.868484 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.868568 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.868653 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.868737 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.875607 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:33:08.189922889 +0000 UTC Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.877288 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.891700 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.911816 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.923593 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.947103 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.962713 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.971227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.971250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.971277 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.971292 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.971302 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.977568 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4713]: I0126 15:34:55.990290 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.003825 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.022801 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.035574 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.053196 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.068077 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.074929 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.074978 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.074993 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.075013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.075028 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.083116 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.095186 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.177412 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.177480 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.177494 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.177511 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.177523 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.280398 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.280462 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.280472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.280489 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.280501 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.384054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.384101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.384113 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.384129 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.384141 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.487937 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.487988 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.488022 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.488041 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.488056 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.591612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.591684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.591697 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.591911 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.591924 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.694047 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.694078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.694087 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.694100 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.694109 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.796625 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.796682 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.796701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.796728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.796745 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.802930 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:34:56 crc kubenswrapper[4713]: E0126 15:34:56.803126 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.876785 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:40:28.651597967 +0000 UTC
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.898995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.899024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.899034 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.899050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:56 crc kubenswrapper[4713]: I0126 15:34:56.899060 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.000895 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.001121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.001205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.001300 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.001442 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.104280 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.104324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.104333 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.104349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.104383 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.206721 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.206777 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.206789 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.206812 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.206826 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.309768 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.310065 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.310140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.310206 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.310265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.344966 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/0.log" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.345098 4713 generic.go:334] "Generic (PLEG): container finished" podID="d21f731c-7a63-4c3c-bdc5-9267197741d4" containerID="5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79" exitCode=1 Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.345183 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerDied","Data":"5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.345678 4713 scope.go:117] "RemoveContainer" containerID="5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.359748 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.371211 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.392079 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.408221 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.413663 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.413710 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.413723 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.413741 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.413754 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.428719 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.442733 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.455395 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.510932 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.516636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.516672 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.516682 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.516697 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.516709 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.529081 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.549613 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.565482 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.577087 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.592525 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.605749 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.615516 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.619772 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.619806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.619818 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.619836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.619850 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.628668 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.642572 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.655488 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.679951 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.722078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.722438 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.722470 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.722495 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.722523 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.803451 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.803521 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.803568 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:57 crc kubenswrapper[4713]: E0126 15:34:57.803627 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:57 crc kubenswrapper[4713]: E0126 15:34:57.803773 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:57 crc kubenswrapper[4713]: E0126 15:34:57.804069 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.824624 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.824670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.824684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.824703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.824716 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.878312 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:37:56.17329634 +0000 UTC Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.928051 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.928121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.928136 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.928159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4713]: I0126 15:34:57.928173 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.031430 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.031501 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.031519 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.031542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.031558 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.135002 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.135061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.135079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.135101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.135117 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.237664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.237702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.237713 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.237728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.237740 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.340781 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.340831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.340843 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.340860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.340875 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.351561 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/0.log" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.351633 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerStarted","Data":"81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.365341 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.376587 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.390315 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.403961 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.420394 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.438816 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.442986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.443025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.443037 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.443054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.443067 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.452426 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.466185 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.481519 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.491669 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.505307 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.523708 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.541382 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.545133 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.545160 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.545172 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.545188 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.545200 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.558697 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.574804 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.592577 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.615592 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.628686 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.642747 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.647804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.647851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.647865 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.647889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.647904 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.750322 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.750382 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.750392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.750409 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.750421 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.803248 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:34:58 crc kubenswrapper[4713]: E0126 15:34:58.803457 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.852675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.852720 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.852730 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.852746 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.852758 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.878944 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:13:12.085113307 +0000 UTC Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.955724 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.955770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.955779 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.955794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4713]: I0126 15:34:58.955804 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.059435 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.059589 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.059616 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.059641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.059693 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.163168 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.163239 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.163249 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.163268 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.163277 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.265559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.265614 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.265634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.265657 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.265675 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.368740 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.368793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.368807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.368825 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.368838 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.471215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.471295 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.471310 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.471328 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.471341 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.574455 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.574505 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.574523 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.574547 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.574565 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.678124 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.678171 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.678187 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.678207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.678224 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.781061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.781116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.781128 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.781151 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.781166 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.803297 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.803417 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:59 crc kubenswrapper[4713]: E0126 15:34:59.803541 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:59 crc kubenswrapper[4713]: E0126 15:34:59.803657 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.803953 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:59 crc kubenswrapper[4713]: E0126 15:34:59.804237 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.879723 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:26:53.739329236 +0000 UTC Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.883824 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.883870 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.883882 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.883899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.883914 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.986811 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.986901 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.986913 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.986943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4713]: I0126 15:34:59.986960 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.089583 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.089626 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.089658 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.089676 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.089689 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.193387 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.193441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.193456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.193476 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.193488 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.296459 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.296508 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.296520 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.296541 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.296554 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.400105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.400167 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.400184 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.400208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.400225 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.502902 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.502970 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.502983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.503005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.503018 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.606420 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.606480 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.606491 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.606513 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.606524 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.709984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.710063 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.710086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.710114 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.710137 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.802524 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:35:00 crc kubenswrapper[4713]: E0126 15:35:00.802741 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.812764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.812828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.812850 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.812877 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.812902 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.880791 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:40:12.393953826 +0000 UTC
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.916076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.916146 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.916159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.916179 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:00 crc kubenswrapper[4713]: I0126 15:35:00.916195 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.019239 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.019275 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.019288 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.019305 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.019317 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.122726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.122770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.122780 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.122796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.122810 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.225510 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.225572 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.225590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.225644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.225660 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.328275 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.328393 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.328410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.328431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.328444 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.432632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.432706 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.432725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.432750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.432770 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.535543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.535591 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.535602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.535619 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.535630 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.637892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.637965 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.637976 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.637995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.638007 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.740781 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.740839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.740852 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.740874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.740889 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.802637 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.802692 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.802785 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:01 crc kubenswrapper[4713]: E0126 15:35:01.803016 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:01 crc kubenswrapper[4713]: E0126 15:35:01.803154 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:01 crc kubenswrapper[4713]: E0126 15:35:01.803354 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
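Every pod sync above fails for the same root cause the Ready condition reports: there is no CNI configuration under /etc/kubernetes/cni/net.d/, so the runtime keeps NetworkReady=false and the kubelet refuses to start new sandboxes. A minimal triage sketch for that first check (a hypothetical helper, not kubelet code; the extensions are the ones CRI runtimes conventionally scan for):

```python
# Hypothetical diagnostic helper -- not kubelet code. When every pod sync
# fails with "network is not ready", the first thing to verify is whether
# the network provider ever wrote a CNI config into the directory the
# kubelet names in its error message.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs() -> list[Path]:
    """Return candidate CNI config files (.conf/.conflist/.json)."""
    if not CNI_CONF_DIR.is_dir():
        return []
    return sorted(
        p for p in CNI_CONF_DIR.iterdir()
        if p.suffix in {".conf", ".conflist", ".json"}
    )

if __name__ == "__main__":
    found = cni_configs()
    if not found:
        print(f"no CNI configuration file in {CNI_CONF_DIR} -- has the network provider started?")
    else:
        for p in found:
            print("found:", p)
```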
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.844898 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.844973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.844984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.845001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.845012 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.881634 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:04:18.08263769 +0000 UTC
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.948821 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.948914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.948941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.948973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:01 crc kubenswrapper[4713]: I0126 15:35:01.948992 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.052734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.052800 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.052820 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.052847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.052865 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.156020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.156086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.156104 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.156161 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.156183 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.259745 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.259787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.259803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.259826 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.259842 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.363432 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.363474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.363487 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.363502 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.363515 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.467294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.467335 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.467345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.467379 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.467393 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.570425 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.570629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.570666 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.570697 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.570721 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.673514 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.673563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.673582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.673608 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.673621 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.776088 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.776162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.776172 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.776201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.776212 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.803015 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:35:02 crc kubenswrapper[4713]: E0126 15:35:02.803249 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.878793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.878839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.878848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.878865 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.878876 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.882503 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:21:25.321238307 +0000 UTC
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.981738 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.981784 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.981798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.981822 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:02 crc kubenswrapper[4713]: I0126 15:35:02.981834 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
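The certificate_manager lines above show the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass. That is expected behavior: client-go's certificate manager re-draws a jittered rotation deadline at roughly 70-90% of the certificate's lifetime each time it evaluates the certificate, and once the clock passes the drawn deadline it keeps attempting rotation. A sketch of that rule (illustrative only; the one-year lifetime is an assumption, chosen because it happens to reproduce the Nov 26 / Dec 24 / Jan 16 spread seen here):

```python
# Illustrative sketch of a jittered rotation deadline:
#   deadline = notBefore + lifetime * U[0.7, 0.9]
# Not the real client-go code; dates mirror the log above. Because a fresh
# value is drawn on each evaluation, the logged deadline moves every pass.
import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
not_before = not_after - timedelta(days=365)   # assumed one-year certificate
lifetime = not_after - not_before

def rotation_deadline(rng: random.Random) -> datetime:
    # Uniform draw in [0.7, 0.9] of the validity window.
    return not_before + lifetime * (0.7 + 0.2 * rng.random())

rng = random.Random(0)
for _ in range(3):
    print("rotation deadline is", rotation_deadline(rng))
```

With the assumed one-year window, every draw lands between early November 2025 and mid-January 2026, all in the past relative to the node's clock (2026-01-26), which is why rotation is continuously due.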
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.086058 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.086124 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.086139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.086163 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.086181 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.190208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.190259 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.190271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.190299 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.190315 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.294012 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.294109 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.294130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.294192 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.294207 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.397428 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.397483 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.397507 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.397529 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.397545 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.500970 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.501001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.501009 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.501022 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.501033 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.604311 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.604347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.604357 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.604393 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.604403 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.707577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.707701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.707729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.707762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.707786 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.735999 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.736080 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.736104 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.736137 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.736201 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
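The retries that follow show why none of these status updates stick: the PATCH to the API server is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, five months before the node's clock. A quick, hypothetical way to confirm such an expiry from the host (a sketch, assuming the third-party cryptography package, version 42+ for not_valid_after_utc):

```python
# Hypothetical check for the failure mode in the entries below: fetch the
# webhook's serving certificate and compare its notAfter to the clock.
# Verification is disabled on purpose -- an expired certificate would
# otherwise abort the handshake before we could inspect it.
import socket
import ssl
from datetime import datetime, timezone
from cryptography import x509  # assumed installed (pip install cryptography)

HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notAfter:", cert.not_valid_after_utc)
print("expired:", cert.not_valid_after_utc < now)
```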
Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.760728 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 
2025-08-24T17:21:41Z"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.765755 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.765796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.765807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.765822 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.765833 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.782113 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.786523 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.786834 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.786848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.786868 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.786881 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.802142 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.802616 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.802631 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.802752 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.802824 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.802958 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.803072 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.806851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.806894 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.806907 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.806925 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.806940 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.821770 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.827860 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.827911 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.827922 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.827941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.827954 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.842065 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4713]: E0126 15:35:03.842231 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.843820 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.843907 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.843926 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.843948 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.843960 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.883694 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:07:26.496037023 +0000 UTC Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.946936 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.946989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.947001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.947020 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4713]: I0126 15:35:03.947033 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.050062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.050116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.050128 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.050147 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.050162 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.152644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.152682 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.152691 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.152709 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.152718 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.255678 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.255737 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.255746 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.255769 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.255779 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.359407 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.359460 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.359473 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.359497 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.359512 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.462544 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.462613 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.462626 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.462644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.462671 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.565637 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.565703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.565720 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.565745 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.565764 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.669570 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.669616 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.669629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.669648 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.669663 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.772928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.772964 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.772975 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.772993 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.773005 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.802901 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:04 crc kubenswrapper[4713]: E0126 15:35:04.803069 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.804067 4713 scope.go:117] "RemoveContainer" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.875501 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.875554 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.875564 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.875582 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.875594 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.883849 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:17:05.955800116 +0000 UTC Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.986957 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.987005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.987018 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.987040 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4713]: I0126 15:35:04.987055 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.090664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.090735 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.090752 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.090776 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.090798 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.194097 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.194157 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.194169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.194189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.194203 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.297046 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.297101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.297116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.297134 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.297150 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.379182 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/2.log" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.382735 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.383749 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.400819 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.401205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.401447 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.401457 4713 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.401472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.401483 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.415522 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.431141 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.452499 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24
f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.467018 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.481342 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 
15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.493928 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.504127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.504175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.504184 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.504201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.504212 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.508497 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.527329 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.544291 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.557280 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.596580 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.607324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.607619 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.607736 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.607836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.607924 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.612744 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.626720 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.641577 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.651973 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.667519 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.680739 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.692017 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.710731 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.710774 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.710782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.710799 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.710810 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.803235 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:05 crc kubenswrapper[4713]: E0126 15:35:05.803379 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.803592 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.803628 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:05 crc kubenswrapper[4713]: E0126 15:35:05.803761 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:05 crc kubenswrapper[4713]: E0126 15:35:05.803821 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.812681 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.812731 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.812744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.812763 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.812776 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.822781 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.847992 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.866958 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.883546 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.884489 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:24:32.467047279 +0000 UTC Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.899440 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.912675 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.916297 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.916326 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.916333 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.916347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.916357 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.921843 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.939537 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 
15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.954476 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.967712 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.979315 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:05 crc kubenswrapper[4713]: I0126 15:35:05.992721 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:05Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.003521 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.019121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.019398 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.019490 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.019568 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.019658 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.027449 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.040789 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.059996 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.073020 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.089550 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.106046 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.123332 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.123392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.123404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.123422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.123435 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.226781 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.226834 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.226853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.226883 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.226906 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.331199 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.331265 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.331285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.331317 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.331340 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.388634 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/3.log" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.389385 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/2.log" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.392388 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" exitCode=1 Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.392432 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.392481 4713 scope.go:117] "RemoveContainer" containerID="bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.393492 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:35:06 crc kubenswrapper[4713]: E0126 15:35:06.393745 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.416246 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.434985 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.435042 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.435056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.435074 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.435086 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.436527 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.448473 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.460208 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.472744 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.484628 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.515515 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc85d533b39631212867168cbade5fa10e103e8f1c26539578c3d74490d8a7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:40Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:40.011676 6374 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:40.011832 6374 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:40.011846 6374 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:40.011977 6374 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:40.012038 6374 factory.go:656] Stopping watch factory\\\\nI0126 15:34:40.012061 6374 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:40.012130 6374 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:40.012149 6374 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:40.012162 6374 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:40.012177 6374 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:40.012189 6374 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:40.012217 6374 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:40.012234 6374 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:40.012254 6374 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:40.012273 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:40.012415 6374 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:06Z\\\",\\\"message\\\":\\\"66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0126 15:35:06.075501 6735 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.530906 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.538070 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.538115 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.538127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.538144 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.538158 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.544396 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.557684 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.572340 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.586428 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.598811 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.612396 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.625019 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.637565 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.641098 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.641140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.641149 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.641166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.641179 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.651065 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.663602 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.689130 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:06Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.743901 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.743958 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.743967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.743984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.743995 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.803291 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:06 crc kubenswrapper[4713]: E0126 15:35:06.803490 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.846785 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.846828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.846837 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.846854 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.846864 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.885307 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:12:44.625266719 +0000 UTC Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.949812 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.949873 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.949886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.949909 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4713]: I0126 15:35:06.949923 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.052749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.052811 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.052825 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.052850 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.052867 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.156280 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.156345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.156388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.156415 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.156434 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.259013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.259063 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.259079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.259100 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.259118 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.362234 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.362312 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.362334 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.362400 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.362420 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.397966 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/3.log"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.403064 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"
Jan 26 15:35:07 crc kubenswrapper[4713]: E0126 15:35:07.403300 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"
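[Annotation] Every "Failed to update status for pod" record in this window shares one root cause: the pod.network-node-identity.openshift.io webhook is serving a certificate whose NotAfter is 2025-08-24T17:21:41Z while the node clock reads 2026-01-26, so TLS verification rejects every Post to https://127.0.0.1:9743/pod. The check that produces this exact error class compares the clock against the certificate's validity window; a minimal standalone sketch in Go (the certificate file path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Illustrative path; in the log the offending certificate is the one
	// served by the webhook behind https://127.0.0.1:9743.
	data, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The condition crypto/x509 reports as "certificate has expired
		// or is not yet valid: current time ... is after ...".
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is rotated (or its CA bundle refreshed), the kubelet can only retry and log the same failure for each pod, which is why identical patch payloads recur below.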
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.440941 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.454540 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.465247 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.465290 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.465299 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.465315 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.465330 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.473133 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.486517 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.499590 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.515585 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.531008 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.542086 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.554680 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.567177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.567218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.567227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.567242 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.567252 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.568435 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.581098 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.607526 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:06Z\\\",\\\"message\\\":\\\"66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0126 15:35:06.075501 6735 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.622974 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.635680 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.653072 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.668060 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.670511 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.670559 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.670570 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.670588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.670598 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.681717 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.694884 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:07Z is after 2025-08-24T17:21:41Z" Jan 26 
15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.772663 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.772719 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.772731 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.772751 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.772763 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
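All of the "Failed to update status for pod" entries above fail the same way: the status patch has to pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint is serving a certificate that expired on 2025-08-24 while the node clock reads 2026-01-26. A minimal sketch to confirm the expiry from the node, assuming Python 3 with the third-party cryptography package (nothing in the log implies it is installed):

```python
# Sketch: fetch the webhook's serving certificate and print its validity
# window. Host and port come from the webhook URL in the entries above;
# the "cryptography" package (>= 42 for the *_utc accessors) is assumed.
import ssl
from datetime import datetime, timezone

from cryptography import x509

# No verification here; failing verification is exactly the symptom above.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)  # log shows 2025-08-24T17:21:41Z
print("expired:  ", now > cert.not_valid_after_utc)
```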
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.803248 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.803247 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.803555 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:07 crc kubenswrapper[4713]: E0126 15:35:07.803489 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:07 crc kubenswrapper[4713]: E0126 15:35:07.803668 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:07 crc kubenswrapper[4713]: E0126 15:35:07.803812 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
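The pod sync errors and the NodeNotReady condition all hang off one missing file: no CNI configuration has been written to /etc/kubernetes/cni/net.d/ yet, which is expected until the ovn-kubernetes pods come back up. A quick check on the node, as a sketch (config file names vary by network provider):

```python
# Sketch: check whether any CNI config has appeared yet. The directory is
# taken verbatim from the kubelet message; run it on the node itself
# (e.g. via "oc debug node/crc").
from pathlib import Path

net_d = Path("/etc/kubernetes/cni/net.d")
confs = sorted(net_d.glob("*.conf*")) if net_d.exists() else []
if confs:
    for conf in confs:
        print("CNI config present:", conf)
else:
    # This is the state the kubelet keeps complaining about above.
    print("no CNI configuration file in", net_d)
```

Once the network plugin writes its config there, the Ready condition should flip back on its own.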
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.875626 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.875689 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.875705 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.875725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.875738 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.885758 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:47:13.047091285 +0000 UTC
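The kubelet-serving rotation deadline is different every time this line is logged (2025-12-30 here, 2026-01-06 and 2025-11-20 further down) because client-go's certificate manager re-rolls a randomized deadline on each pass, picking a point roughly 70 to 90 percent of the way through the certificate's validity. A sketch of that calculation; the issue time below is assumed, since the log only shows the expiration:

```python
# Sketch: client-go's certificate manager picks the next rotation deadline
# at a random point ~70-90% into the cert's lifetime, so the deadline
# changes on every evaluation. The fractions and notBefore value are
# assumptions here; the expiration matches the log.
import random
from datetime import datetime

not_before = datetime.fromisoformat("2025-11-26T05:53:03")  # assumed issue time
not_after = datetime.fromisoformat("2026-02-24T05:53:03")   # from the log
lifetime = not_after - not_before

for _ in range(3):
    deadline = not_before + lifetime * random.uniform(0.7, 0.9)
    print("rotation deadline:", deadline)
```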
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.978241 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.978283 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.978297 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.978328 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:07 crc kubenswrapper[4713]: I0126 15:35:07.978340 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.080967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.081298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.081310 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.081327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.081340 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.183872 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.183918 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.183931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.183952 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.183967 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.286829 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.286894 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.286912 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.286940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.286959 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.390828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.390876 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.390885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.390903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.390915 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.497332 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.497407 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.497424 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.497446 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.497463 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.600621 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.600705 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.600732 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.600765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.600790 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.704136 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.704174 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.704183 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.704197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.704209 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.802534 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:08 crc kubenswrapper[4713]: E0126 15:35:08.802762 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.806802 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.806836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.806846 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.806861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.806873 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.886278 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:16:07.996176973 +0000 UTC Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.909861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.909932 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.909949 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.909977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4713]: I0126 15:35:08.909999 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.012984 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.013059 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.013079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.013104 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.013122 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.116107 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.116143 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.116152 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.116166 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.116175 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.219328 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.219423 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.219441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.219465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.219483 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.321807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.321918 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.321927 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.321940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.321949 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.424238 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.424298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.424309 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.424324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.424333 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.527533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.527587 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.527606 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.527629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.527644 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.630147 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.630243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.630283 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.630324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.630353 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.733590 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.733648 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.733675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.733702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.733720 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.740938 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.740995 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.741103 4713 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.741170 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.741149831 +0000 UTC m=+148.878167066 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.741208 4713 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.741321 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.741293565 +0000 UTC m=+148.878310800 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
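The "No retries permitted until ... (durationBeforeRetry 1m4s)" lines are the kubelet's per-volume exponential backoff: every consecutive failure of the same mount or unmount operation doubles the wait. Taking the usual 500ms initial delay as an assumption, 1m4s is 0.5s * 2^7, i.e. these volumes have already failed eight times in a row:

```python
# Sketch of the doubling backoff behind "durationBeforeRetry 1m4s". The
# 500ms starting point is an assumption (the kubelet's exponentialbackoff
# constants may differ); the doubling itself is what the log shows.
from datetime import timedelta

delay = timedelta(milliseconds=500)
for failure in range(1, 9):
    print(f"failure {failure}: next retry in {delay.total_seconds():g}s")
    delay *= 2
# failure 8 prints 64s, i.e. the 1m4s in the entries above.
```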
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.803181 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.803214 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.803407 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.803721 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.803838 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.803948 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.835762 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.835800 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.835810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.835827 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.835840 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.886742 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:36:55.256603809 +0000 UTC
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.940113 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.940165 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.940176 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.940197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.940209 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.942768 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.942975 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.942946723 +0000 UTC m=+149.079963948 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
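The TearDown failure is not about the PVC itself: after a kubelet restart, node-local CSI drivers have to re-register over their plugin sockets, and kubevirt.io.hostpath-provisioner simply has not done so yet. A sketch that lists what has registered, assuming the default kubelet plugin registry path:

```python
# Sketch: list the plugin sockets node-local CSI drivers use to register
# with the kubelet. /var/lib/kubelet/plugins_registry is the kubelet
# default and an assumption here.
from pathlib import Path

registry = Path("/var/lib/kubelet/plugins_registry")
socks = sorted(registry.glob("*.sock")) if registry.exists() else []
if socks:
    for sock in socks:
        print("registered:", sock.name)
else:
    print("no plugin sockets under", registry)
```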
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.944490 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:09 crc kubenswrapper[4713]: I0126 15:35:09.944543 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944763 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944804 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944824 4713 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944895 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.944874928 +0000 UTC m=+149.081892173 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944971 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.944999 4713 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.945015 4713 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:09 crc kubenswrapper[4713]: E0126 15:35:09.945063 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.945048193 +0000 UTC m=+149.082065598 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.042315 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.042381 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.042392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.042408 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.042417 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.146199 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.146269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.146291 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.146321 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.146347 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.250257 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.250327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.250354 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.250439 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.250466 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.353198 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.353240 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.353251 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.353270 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.353281 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.455397 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.455465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.455485 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.455512 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.455530 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.558960 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.559024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.559036 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.559054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.559066 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.661770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.661839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.661861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.661885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.661903 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.764327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.764386 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.764397 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.764410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.764421 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.803086 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:10 crc kubenswrapper[4713]: E0126 15:35:10.803303 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.866338 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.866391 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.866401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.866417 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.866430 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.887028 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:11:25.688900792 +0000 UTC Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.969542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.969602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.969614 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.969635 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4713]: I0126 15:35:10.969650 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.071967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.072078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.072096 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.072120 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.072137 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.174847 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.174963 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.174974 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.174992 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.175003 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.277397 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.277438 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.277448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.277465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.277476 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.381385 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.381628 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.381645 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.381670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.381687 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.486243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.486331 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.486356 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.486421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.486444 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.590888 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.590941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.590954 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.590977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.590990 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.694576 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.694630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.694641 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.694662 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.694675 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.797640 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.797698 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.797710 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.797729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.797744 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.803492 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.803504 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.803646 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:11 crc kubenswrapper[4713]: E0126 15:35:11.803787 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:11 crc kubenswrapper[4713]: E0126 15:35:11.803936 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:11 crc kubenswrapper[4713]: E0126 15:35:11.804103 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.887461 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:25:02.190370456 +0000 UTC Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.901012 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.901055 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.901066 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.901081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4713]: I0126 15:35:11.901093 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.004298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.004388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.004407 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.004431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.004447 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.108589 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.108677 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.108701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.108737 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.108756 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.212086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.212153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.212174 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.212202 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.212268 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.316131 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.316207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.316232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.316262 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.316285 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.419295 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.419400 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.419425 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.419455 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.419476 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.523644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.523726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.523744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.523770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.523789 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.627246 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.627305 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.627324 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.627345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.627390 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.730509 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.730585 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.730602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.730629 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.730648 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.802695 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:12 crc kubenswrapper[4713]: E0126 15:35:12.802902 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.833557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.833631 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.833649 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.833674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.833692 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.888020 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:30:33.350283785 +0000 UTC Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.936935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.937008 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.937032 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.937062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4713]: I0126 15:35:12.937085 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.039724 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.039801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.039827 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.039857 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.039882 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.143557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.143632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.143667 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.143700 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.143723 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.248436 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.248525 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.248542 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.248567 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.248585 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.352588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.352662 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.352694 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.352725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.352750 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.455729 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.455798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.455810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.455831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.455846 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.559122 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.559237 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.559261 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.559293 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.559314 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.662501 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.662550 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.662563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.662588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.662603 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.765798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.765838 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.765849 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.765864 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.765875 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.802921 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.802956 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.802921 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.803666 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.803907 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.804101 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.846251 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.846588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.846653 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.846726 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.846814 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.865604 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.869096 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.869139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.869153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.869170 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.869184 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.894324 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.894488 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:28:16.757142451 +0000 UTC Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.899896 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.899940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.899951 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.899973 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.899986 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.912573 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.916676 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.916718 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.916728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.916759 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.916771 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.928586 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.932328 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.932356 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.932401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.932416 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.932425 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.945103 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 
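Every patch failure above has the same root cause: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26. A minimal Go sketch to confirm this from the node follows; it assumes nothing beyond the address in the log, and InsecureSkipVerify is intentional, since verification is exactly what fails and the point is to inspect the certificate.

// Connect to the webhook endpoint and print the certificate validity
// window, to confirm the x509 error seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Skip verification on purpose: the expired certificate would
	// otherwise abort the handshake before we can read it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}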
2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4713]: E0126 15:35:13.945258 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.947153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.947177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.947186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.947198 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4713]: I0126 15:35:13.947206 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.050122 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.050169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.050181 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.050205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.050220 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.154036 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.154101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.154125 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.154154 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.154177 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.257217 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.257297 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.257331 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.257411 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.257437 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.360215 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.360300 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.360325 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.360356 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.360469 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.463994 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.464052 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.464073 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.464095 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.464111 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.567778 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.567833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.567849 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.567871 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.567889 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.671003 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.671066 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.671084 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.671109 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.671132 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.775073 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.775140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.775160 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.775188 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.775211 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.802563 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:14 crc kubenswrapper[4713]: E0126 15:35:14.803054 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.878867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.878929 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.878954 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.878987 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.879009 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.896051 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:47:34.568034641 +0000 UTC Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.982615 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.982685 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.982705 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.982731 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4713]: I0126 15:35:14.982749 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.085702 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.085740 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.085750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.085765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.085777 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.188481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.188567 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.188606 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.188638 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.188658 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.292207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.292830 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.292914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.293126 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.293225 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.396099 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.396151 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.396162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.396181 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.396194 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.499173 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.499270 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.499293 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.499329 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.499352 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.602967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.603420 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.603595 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.603769 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.603927 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.708148 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.708250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.708269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.708296 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.708316 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.803579 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.803672 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:15 crc kubenswrapper[4713]: E0126 15:35:15.803747 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:15 crc kubenswrapper[4713]: E0126 15:35:15.803924 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.803986 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:15 crc kubenswrapper[4713]: E0126 15:35:15.804798 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.811437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.811469 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.811477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.811496 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.811506 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.831192 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.851759 4713 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.885751 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.896292 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:03:58.58729937 +0000 UTC Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.902327 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.913247 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.913305 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.913321 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.913347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.913432 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.921123 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.933214 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.946271 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.963966 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.976908 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4713]: I0126 15:35:15.994610 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:06Z\\\",\\\"message\\\":\\\"66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0126 15:35:06.075501 6735 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.006415 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.017440 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.017533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.017543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.017562 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.017574 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.018240 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.031553 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.046180 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.057216 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.067625 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.080239 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.093800 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.107161 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:16Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.120169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.120209 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.120223 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.120240 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.120252 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.224067 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.224140 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.224153 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.224172 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.224205 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.327224 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.327287 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.327301 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.327320 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.327335 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.431921 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.431995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.432023 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.432055 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.432079 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.535024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.535091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.535112 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.535142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.535161 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.638202 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.638244 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.638260 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.638282 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.638296 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.741640 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.741699 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.741719 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.741749 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.741767 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.803480 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:16 crc kubenswrapper[4713]: E0126 15:35:16.803743 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.845092 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.845158 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.845176 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.845199 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.845220 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.897399 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:56:42.577791443 +0000 UTC Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.948955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.949016 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.949025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.949040 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4713]: I0126 15:35:16.949052 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.052867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.052927 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.052943 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.052968 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.052986 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.155977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.156056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.156081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.156105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.156124 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.258773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.258835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.258875 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.258908 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.258932 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.362908 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.362983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.363004 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.363031 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.363050 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.466098 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.466184 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.466207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.466237 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.466263 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.569704 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.569777 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.569790 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.569811 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.569830 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.674254 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.674310 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.674321 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.674341 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.674355 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.778511 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.778581 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.778597 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.778619 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.778632 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.803309 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.803327 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.803394 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:17 crc kubenswrapper[4713]: E0126 15:35:17.803965 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:17 crc kubenswrapper[4713]: E0126 15:35:17.804122 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:17 crc kubenswrapper[4713]: E0126 15:35:17.804260 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.881188 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.881319 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.881347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.881431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.881458 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.898517 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:02:46.097020349 +0000 UTC Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.985493 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.985551 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.985563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.985581 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4713]: I0126 15:35:17.985594 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.088247 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.088333 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.088347 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.088383 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.088400 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.192345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.192461 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.192487 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.192519 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.192542 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.296874 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.297408 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.297421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.297442 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.297455 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.400479 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.400528 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.400538 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.400557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.400569 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.503392 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.503439 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.503450 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.503466 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.503478 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.607173 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.607232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.607253 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.607279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.607302 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.710829 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.710930 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.710955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.710991 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.711014 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.803251 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:18 crc kubenswrapper[4713]: E0126 15:35:18.803544 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.813669 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.813750 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.813773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.813804 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.813830 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.899055 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:03:28.930037552 +0000 UTC Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.916266 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.916345 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.916385 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.916412 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4713]: I0126 15:35:18.916429 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.019533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.019630 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.019658 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.019690 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.019714 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.121982 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.122039 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.122056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.122079 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.122096 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.225192 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.225266 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.225278 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.225298 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.225312 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.333257 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.333390 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.333411 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.333437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.333454 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.435911 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.435955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.435967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.435983 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.435995 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.538967 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.539027 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.539041 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.539061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.539077 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.642302 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.642379 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.642390 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.642432 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.642443 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.745939 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.745994 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.746007 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.746026 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.746039 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.803651 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:19 crc kubenswrapper[4713]: E0126 15:35:19.803796 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.803886 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.804446 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:19 crc kubenswrapper[4713]: E0126 15:35:19.804613 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:19 crc kubenswrapper[4713]: E0126 15:35:19.804719 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.805070 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"
Jan 26 15:35:19 crc kubenswrapper[4713]: E0126 15:35:19.805412 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.849680 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.849752 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.849771 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.849798 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.849822 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.900238 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:31:15.982534828 +0000 UTC Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.953318 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.953388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.953401 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.953421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4713]: I0126 15:35:19.953433 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.056811 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.056861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.056873 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.056892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.056908 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.159338 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.159428 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.159441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.159465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.159481 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.262009 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.262056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.262064 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.262077 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.262095 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.365177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.365226 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.365236 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.365255 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.365265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.468082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.468119 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.468129 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.468146 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.468159 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.571222 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.571294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.571307 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.571330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.571343 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.673953 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.673998 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.674010 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.674029 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.674042 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.778533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.778604 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.778621 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.778643 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.778658 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.802903 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:20 crc kubenswrapper[4713]: E0126 15:35:20.803272 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.881861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.881914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.881926 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.881947 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.881959 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.901186 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:54:26.831416398 +0000 UTC Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.985142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.985181 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.985191 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.985207 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4713]: I0126 15:35:20.985219 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.087313 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.087399 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.087419 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.087438 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.087463 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.190520 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.190577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.190600 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.190625 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.190640 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.293218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.293249 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.293259 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.293277 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.293289 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.396118 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.396170 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.396186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.396206 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.396220 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.500124 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.500179 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.500195 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.500214 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.500233 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.602815 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.603167 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.603403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.603633 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.603860 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.707941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.707989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.708000 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.708019 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.708034 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.803089 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.803185 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.803725 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:21 crc kubenswrapper[4713]: E0126 15:35:21.803913 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:21 crc kubenswrapper[4713]: E0126 15:35:21.804101 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:21 crc kubenswrapper[4713]: E0126 15:35:21.804296 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.810594 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.810647 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.810664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.810689 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.810706 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.903740 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:42:19.694653053 +0000 UTC Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.914175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.914209 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.914218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.914232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4713]: I0126 15:35:21.914242 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.017192 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.017654 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.017831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.017980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.018194 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.121603 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.121673 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.121691 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.121720 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.121738 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.224675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.224732 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.224747 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.224770 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.224785 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.328327 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.328435 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.328456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.328481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.328504 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.431707 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.431752 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.431764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.431781 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.431793 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.534735 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.534796 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.534814 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.534840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.534858 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.638089 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.638154 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.638175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.638203 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.638224 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.742130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.742187 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.742204 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.742228 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.742245 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.803612 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:22 crc kubenswrapper[4713]: E0126 15:35:22.803829 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.845786 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.845852 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.845864 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.845883 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.845896 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.905441 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:05:49.55513218 +0000 UTC Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.948985 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.949038 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.949051 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.949072 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4713]: I0126 15:35:22.949086 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.051862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.051924 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.051940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.051959 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.051973 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.155504 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.155579 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.155598 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.155624 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.155645 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.259062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.259127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.259148 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.259179 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.259205 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.362482 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.362574 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.362601 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.362637 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.362660 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.465224 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.465275 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.465287 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.465306 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.465318 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.568624 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.568705 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.568724 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.568747 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.568765 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.671403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.671450 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.671473 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.671493 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.671505 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.776794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.776858 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.776872 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.776893 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.776907 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.803592 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:23 crc kubenswrapper[4713]: E0126 15:35:23.803873 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.804323 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.804358 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:23 crc kubenswrapper[4713]: E0126 15:35:23.804463 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:23 crc kubenswrapper[4713]: E0126 15:35:23.804698 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.880162 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.880230 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.880248 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.880273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.880321 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.906463 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:18:29.814231982 +0000 UTC Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.986338 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.986434 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.986454 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.986478 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4713]: I0126 15:35:23.986498 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.089628 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.089683 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.089693 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.089710 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.089721 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.155645 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.155711 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.155728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.155755 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.155774 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.172648 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.178509 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.178561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.178598 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.178618 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.178631 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.194757 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.200546 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.200613 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.200637 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.200668 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.200692 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.223512 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.229981 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.230028 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.230043 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.230061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.230076 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.246453 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.252258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.252319 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.252337 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.252383 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.252403 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.271407 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.271657 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.273640 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.273681 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.273691 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.273708 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.273720 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.377584 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.377636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.377650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.377669 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.377683 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.481261 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.481319 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.481336 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.481416 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.481452 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.585200 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.585255 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.585269 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.585290 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.585306 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.688050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.688101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.688111 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.688138 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.688152 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.791030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.791092 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.791110 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.791134 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.791156 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.803510 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:24 crc kubenswrapper[4713]: E0126 15:35:24.803722 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.894727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.894786 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.894803 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.894858 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.894877 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.907745 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:19:50.808841442 +0000 UTC Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.998030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.998151 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.998177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.998206 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4713]: I0126 15:35:24.998227 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.100578 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.100612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.100622 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.100638 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.100649 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.203413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.203448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.203456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.203470 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.203479 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.306410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.306456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.306465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.306481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.306492 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.342116 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:25 crc kubenswrapper[4713]: E0126 15:35:25.342302 4713 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:25 crc kubenswrapper[4713]: E0126 15:35:25.342407 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs podName:6f185439-f527-44bf-8362-a9cf40e00d3c nodeName:}" failed. No retries permitted until 2026-01-26 15:36:29.342352613 +0000 UTC m=+164.479369858 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs") pod "network-metrics-daemon-4vgps" (UID: "6f185439-f527-44bf-8362-a9cf40e00d3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.409889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.409934 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.409945 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.409962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.409974 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.512867 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.512940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.512955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.512996 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.513008 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.616016 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.616068 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.616080 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.616102 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.616115 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.719571 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.719682 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.719704 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.719734 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.719756 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.802884 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.802960 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.802926 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:25 crc kubenswrapper[4713]: E0126 15:35:25.803289 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:25 crc kubenswrapper[4713]: E0126 15:35:25.803624 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:25 crc kubenswrapper[4713]: E0126 15:35:25.803766 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.824851 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.824863 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.824917 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.825048 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.825070 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.825083 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.843747 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.862957 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.887394 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:06Z\\\",\\\"message\\\":\\\"66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0126 15:35:06.075501 6735 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.902747 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.907898 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:35:43.709139414 +0000 UTC Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.923160 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.929422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.929506 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.929524 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.929573 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.929592 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.940131 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.952922 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.970729 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:25 crc kubenswrapper[4713]: I0126 15:35:25.986407 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.001659 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.022505 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda4
15b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.032340 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.032396 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.032410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.032433 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.032476 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.036501 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.057475 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.081289 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.093605 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.108475 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.122670 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136064 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136611 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136672 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.136716 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.238945 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.239012 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.239034 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.239068 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.239091 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.343189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.343243 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.343255 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.343273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.343285 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.446320 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.446424 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.446442 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.446465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.446483 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.550093 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.550151 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.550164 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.550183 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.550195 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.653062 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.653110 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.653121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.653138 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.653151 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.756556 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.756612 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.756625 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.756648 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.756663 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.802497 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:26 crc kubenswrapper[4713]: E0126 15:35:26.802647 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.859533 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.859567 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.859575 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.859609 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.859620 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.908073 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:47:11.537487879 +0000 UTC Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.961835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.961899 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.961917 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.961942 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4713]: I0126 15:35:26.961959 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.064177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.064218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.064227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.064242 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.064253 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.166765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.166808 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.166846 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.166862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.166873 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.270402 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.270441 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.270449 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.270463 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.270472 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.373793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.373836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.373866 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.373889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.373904 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.476208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.476250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.476262 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.476277 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.476287 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.579376 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.579421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.579431 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.579451 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.579473 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.682598 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.682648 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.682658 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.682681 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.682696 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.786527 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.786583 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.786599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.786623 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.786639 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.802695 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.802793 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:27 crc kubenswrapper[4713]: E0126 15:35:27.802849 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.803050 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:27 crc kubenswrapper[4713]: E0126 15:35:27.803048 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:27 crc kubenswrapper[4713]: E0126 15:35:27.803105 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.890036 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.890082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.890093 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.890112 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.890122 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.909262 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:42:48.284626558 +0000 UTC Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.993902 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.993956 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.993970 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.993995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4713]: I0126 15:35:27.994009 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.096581 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.096911 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.097112 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.097271 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.097436 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.201479 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.201545 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.201566 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.201592 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.201613 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.304456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.304501 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.304510 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.304529 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.304540 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.407530 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.407577 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.407586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.407606 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.407619 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.511404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.511487 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.511516 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.511543 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.511561 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.614801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.614884 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.614914 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.614946 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.614971 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.718258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.718320 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.718336 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.718408 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.718453 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.802850 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:28 crc kubenswrapper[4713]: E0126 15:35:28.803193 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.821229 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.821273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.821299 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.821316 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.821328 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.910423 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:31:29.998846779 +0000 UTC Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.928933 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.928994 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.929007 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.929025 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4713]: I0126 15:35:28.929042 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.032221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.032254 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.032266 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.032281 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.032293 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.134754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.134794 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.134807 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.134825 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.134835 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.237824 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.238145 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.238248 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.238330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.238458 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.342056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.342448 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.342620 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.342725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.342833 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.445816 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.446205 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.446354 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.446538 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.446660 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.549182 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.549643 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.549810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.549950 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.550108 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.653678 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.653739 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.653757 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.653780 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.653797 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.757395 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.757472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.757500 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.757606 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.757664 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.803452 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.803540 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.803461 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:29 crc kubenswrapper[4713]: E0126 15:35:29.803680 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:29 crc kubenswrapper[4713]: E0126 15:35:29.803781 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:29 crc kubenswrapper[4713]: E0126 15:35:29.803934 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.860158 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.860644 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.860802 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.860960 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.861108 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.911169 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:36:04.414122022 +0000 UTC Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.964235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.964316 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.964336 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.964406 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4713]: I0126 15:35:29.964428 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.067219 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.067279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.067295 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.067315 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.067330 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.170017 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.170103 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.170121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.170154 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.170176 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.273769 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.273819 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.273831 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.273848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.273861 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.377035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.377074 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.377085 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.377101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.377112 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.481121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.481186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.481204 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.481234 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.481252 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.585286 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.585396 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.585423 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.585455 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.585479 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.688388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.688454 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.688472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.688496 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.688513 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.791788 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.791853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.791868 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.791892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.791910 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.803533 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:30 crc kubenswrapper[4713]: E0126 15:35:30.803779 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.895568 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.895650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.895675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.895707 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.895735 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.912842 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:42:11.773865197 +0000 UTC Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.998795 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.998923 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.998941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.998965 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4713]: I0126 15:35:30.998982 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.101974 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.102014 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.102022 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.102038 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.102048 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.205289 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.205449 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.205488 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.205512 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.205530 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.309283 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.309352 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.309403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.309436 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.309459 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.412539 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.412607 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.412625 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.412651 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.412669 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.516041 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.516139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.516180 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.516221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.516247 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.619295 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.619359 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.619437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.619470 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.619495 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.721889 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.721946 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.721961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.721980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.721994 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.803010 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.803010 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.803550 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:31 crc kubenswrapper[4713]: E0126 15:35:31.803616 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:31 crc kubenswrapper[4713]: E0126 15:35:31.803549 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:31 crc kubenswrapper[4713]: E0126 15:35:31.803774 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.803854 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:35:31 crc kubenswrapper[4713]: E0126 15:35:31.804641 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.824161 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.824209 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.824218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.824235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.824245 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.913676 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:48:15.22946264 +0000 UTC Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.927558 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.927634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.927653 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.927678 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4713]: I0126 15:35:31.927697 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.030670 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.030728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.030745 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.030769 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.030788 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.134201 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.134264 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.134289 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.134318 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.134339 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.237882 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.237928 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.237937 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.237956 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.237966 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.340897 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.340969 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.340989 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.341012 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.341031 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.444284 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.444416 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.444457 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.444489 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.444511 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.548173 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.548235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.548250 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.548270 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.548285 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.652148 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.652221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.652245 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.652277 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.652305 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.756403 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.756478 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.756503 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.756532 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.756554 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.803055 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:32 crc kubenswrapper[4713]: E0126 15:35:32.803261 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.859885 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.859955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.859995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.860035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.860064 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.914881 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:38:24.991429726 +0000 UTC Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.963203 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.963247 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.963257 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.963273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4713]: I0126 15:35:32.963283 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.066285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.066400 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.066423 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.066449 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.066468 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.169344 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.169443 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.169462 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.169486 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.169505 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.272029 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.272094 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.272111 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.272130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.272141 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.375351 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.375437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.375452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.375474 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.375489 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.479329 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.479456 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.479513 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.479541 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.479560 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.582900 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.582978 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.583000 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.583024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.583044 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.686927 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.686996 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.687021 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.687050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.687075 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.789382 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.789427 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.789439 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.789458 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.789472 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.802957 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.803072 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:33 crc kubenswrapper[4713]: E0126 15:35:33.803099 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.803174 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:33 crc kubenswrapper[4713]: E0126 15:35:33.803271 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:33 crc kubenswrapper[4713]: E0126 15:35:33.803324 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.892068 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.892135 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.892148 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.892169 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.892183 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.915411 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:34:34.327652957 +0000 UTC Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.994402 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.994444 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.994452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.994465 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4713]: I0126 15:35:33.994475 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.097631 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.097714 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.097732 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.097759 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.097779 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.201030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.201099 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.201159 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.201190 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.201212 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.304891 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.305013 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.305031 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.305056 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.305074 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.408235 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.408322 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.408395 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.408432 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.408458 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.512039 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.512091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.512108 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.512131 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.512148 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.602472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.602545 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.602563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.602586 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.602606 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.626012 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.631602 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.631700 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.631727 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.631758 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.631781 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.651191 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.656588 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.656666 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.656681 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.656700 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.656714 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.677230 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.682777 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.682838 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.682853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.682873 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.682888 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.701864 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.706679 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.706736 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.706753 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.706778 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.706795 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.724204 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bc24a9ed-92e4-4376-95db-334eab04cd6c\\\",\\\"systemUUID\\\":\\\"6411f4a9-0074-492c-9c99-d43928c7d95b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.724462 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.725924 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.725955 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.725963 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.725978 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.725987 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.803302 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:34 crc kubenswrapper[4713]: E0126 15:35:34.803585 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.829843 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.829930 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.829956 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.829990 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.830011 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.916518 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:09:05.396291138 +0000 UTC Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.933311 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.933379 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.933395 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.933411 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4713]: I0126 15:35:34.933422 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.036790 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.036845 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.036857 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.036877 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.036890 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.140006 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.140086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.140105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.140128 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.140145 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.242024 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.242068 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.242078 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.242092 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.242102 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.345352 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.345410 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.345422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.345437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.345452 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.448730 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.448771 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.448782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.448799 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.448811 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.551622 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.551669 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.551683 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.551701 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.551713 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.655182 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.655242 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.655258 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.655280 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.655298 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.758452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.758512 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.758532 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.758557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.758575 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.803105 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.803179 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:35 crc kubenswrapper[4713]: E0126 15:35:35.803266 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.803299 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:35 crc kubenswrapper[4713]: E0126 15:35:35.803465 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:35 crc kubenswrapper[4713]: E0126 15:35:35.803566 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.821756 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1adeb0fc-d499-4810-b17e-a95da1961946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb916ca0057763325aca746682b0906d4e27870623152e912616a83a74628b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aaf8654af61794fc6a33871c42a5dd8aef5c1b36496619559f8b313d1b56420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56450d07f46f9e956106107e96aa50dc0b142117e5fb21428f8dcd8169f062b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.838941 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dabd84d8-5a82-4789-b965-655386c271f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315a8386cb3cbd006aee6a09042dbfa7135d59bc39b496a90177347648cd2f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f32224c9cf56b039b6ef091eeded5068b40b25456a289d897129ed6f4c0f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c6d413db29a00a2686063774e9ce1c81358f692ab8fd6a23bcddbe2213cdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1518adb2f1c9057974b74fcd257dc2799887f5b6f28f716b45e6d6d1e31fb8ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.857681 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.865557 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.865599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.865611 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.865632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.865643 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.878627 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.889913 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fgqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc036917-2d57-4b40-a5b1-21b68b1f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e42e9be6a2a527af7557aafe14843d9faf1c8df6a9cd6216fee7ad50b053dfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znkxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fgqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.903508 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83374550-b354-4961-8649-e679b13e36e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff7c3b623b354938cf961d21ada8d23b4a4197cf4467d32a57d3c07ea9ac7fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af566c157ffb53c424e8a5a4c76714a09c045c0a4e7d21395dbd8700463bdc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v56q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-92r8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 
15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.917023 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:18:27.046739157 +0000 UTC Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.929676 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee5f8c9-be13-4820-b328-ca26e1b7a77c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b914df39746499b8233b0b7c7225536cbc8910a16563046005355a416a07239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb065d51225821e996a5dda15f154519d650870322be68a2bd68e8d337367241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e522354b22cb28bac65828cd30e6883cbe56db8585c3b9461d04cdc0c9293370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d953aaffecc105052be09c9c79e9ef85f2bbda415b933c23774e3d49b4413140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9550d3da634c4e884f435f7e9c851d2e40c3a86a47699cc2b5408da68a858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f120b4869d7b5cd664d478b544aecd35938d05e4ab5317f471272da9ce873e07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68fd76734b6b9b97bc99142424d243b78ddb17a1932c1aeb583f11f556b16dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc07aeb10910511bfa3fda4ec6ee0210526697c5dce14eb94b55b4c5a1223a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.942184 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4eae7fd13e54d5e3c78aa92c14903b3b08301f9d17f553b2c49dbc10a814bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.955434 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4ld7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d21f731c-7a63-4c3c-bdc5-9267197741d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:56Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc\\\\n2026-01-26T15:34:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e307570e-51cc-4dde-b126-26e74ab1a0cc to /host/opt/cni/bin/\\\\n2026-01-26T15:34:11Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:34:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8524\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4ld7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.968389 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.968421 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.968429 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.968442 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.968452 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.972396 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059cbb92-ce39-4fb3-8a36-0fb66e359701\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be852816f7fd58db071f283fa0690d59a44cabc4fc7755595bc698d7150dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f30368eeaa01cf6ed2b381887d0902f0d5e12cf8704381e25d554340b1af4e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6440ee48390c206ef124e34f05a2cdcb13208a23f550a416eb61e92a2a57d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2142d2689a81b2fde07ac22720c75f36257c85dd5e9c95e0cab2e9a69751564b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61a06594beb671b230d33c7dfba2a49c544971687e2612b1810a006af73e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e446d9a9971495bded9d9649074fb2bb5079d232286ab0a6ac946a5198eb7486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bedc8494e4510380be08cdde9972b8f5c4123169399c9034507134528471add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5gf9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.987473 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vgps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f185439-f527-44bf-8362-a9cf40e00d3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2q5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vgps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:35 crc kubenswrapper[4713]: I0126 15:35:35.999352 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51671fd-08ab-4ba8-a770-b08b39c4de88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24ccbe375d40bd63a664c32c9a308c1127bcd914d25bbfbb991bbdf0d7d3108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e868019d790102bb4cbee81534a18c31c978670a5ef7a7c368020930c437f32a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.018612 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93543424-4011-4a77-a471-5f0ef9989535\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0126 15:34:05.713090 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 15:34:05.713253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 15:34:05.715266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2710445253/tls.crt::/tmp/serving-cert-2710445253/tls.key\\\\\\\"\\\\nI0126 15:34:06.253798 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 15:34:06.614264 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 15:34:06.614298 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 15:34:06.614328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 15:34:06.614335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 15:34:06.730615 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 15:34:06.730752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 15:34:06.730802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 15:34:06.730823 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 15:34:06.730845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 15:34:06.730867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 15:34:06.730827 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 15:34:06.737528 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:33:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:33:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.039076 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8ef013b2a7ba3df83c1f04e7d3272bbf8fd9a37702973ec6a41b5a79bb2d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee694e9d6d1df984c2af9e5a327866aa62919976e279608eea6b4d22ac6e735e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.051460 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t2rqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46e41399-4ca1-47ca-8151-b953f284e096\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f1115f48edf28a9a1c857499ab627dfb3c620c75d6a7d161197a4e7e9d62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t2rqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.066985 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2290842d3299c2cdac60e5ed4b9d646c6cc5e4b43e835fbec06f32989303f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.071800 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.071833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.071844 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.071862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.071875 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.083228 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.097518 4713 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f608dd80-4cbf-4490-b062-2bef233d25d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec33ddadf9fefa80fa1810508738b625d85258d3e3a5208bf46ba747c92948d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2hcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tn7l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.116259 4713 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:06Z\\\",\\\"message\\\":\\\"66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0126 15:35:06.075501 6735 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2drw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:36Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.173891 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.173945 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.173961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.173980 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.173992 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.276806 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.277295 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.277537 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.277754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.277964 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.381161 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.381208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.381221 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.381242 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.381255 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.484101 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.484164 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.484177 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.484219 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.484235 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.586862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.586948 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.586956 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.586971 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.586986 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.689591 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.689657 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.689675 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.689699 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.689719 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.792254 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.792313 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.792330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.792359 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.792387 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.802925 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:36 crc kubenswrapper[4713]: E0126 15:35:36.803187 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.895132 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.895208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.895277 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.895304 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.895323 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.918130 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:43:40.948769064 +0000 UTC Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.998335 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.998416 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.998433 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.998452 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:36 crc kubenswrapper[4713]: I0126 15:35:36.998462 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.101748 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.101793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.101805 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.101825 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.101837 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.204082 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.204186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.204217 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.204232 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.204242 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.307123 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.307197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.307208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.307224 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.307236 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.410285 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.410320 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.410333 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.410349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.410387 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.511776 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.511820 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.511835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.511852 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.511863 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.613964 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.614043 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.614064 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.614090 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.614110 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.717782 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.717853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.717870 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.717903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.717921 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.803028 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.803169 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.803231 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:37 crc kubenswrapper[4713]: E0126 15:35:37.803231 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:37 crc kubenswrapper[4713]: E0126 15:35:37.803433 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:37 crc kubenswrapper[4713]: E0126 15:35:37.803609 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
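Every NodeNotReady heartbeat and skipped pod sync in this stretch traces back to one condition: there is no file in /etc/kubernetes/cni/net.d/. A rough, dependency-free approximation of the directory scan behind that message (assuming the extensions libcni normally accepts):

```go
// cniscan.go - a simplified approximation of how the kubelet's CNI driver
// decides "no CNI configuration file". Assumption: it accepts the .conf,
// .conflist, and .json extensions that libcni looks for.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The state the kubelet keeps reporting: NetworkPluginNotReady.
		fmt.Println("no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```

On an OVN-Kubernetes node that config is normally written by ovnkube-node, which here appears wedged behind the expired webhook certificate above, so the condition never clears on its own.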
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.821129 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.821197 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.821218 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.821245 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.821265 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.919315 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:27:12.195876227 +0000 UTC Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.925334 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.925388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.925398 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.925413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:37 crc kubenswrapper[4713]: I0126 15:35:37.925424 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.028102 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.028189 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.028213 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.028246 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.028272 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.131045 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.131116 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.131130 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.131185 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.131203 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.233793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.233951 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.233977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.234049 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.234070 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.337855 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.337935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.337962 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.337994 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.338016 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.441026 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.441091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.441115 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.441148 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.441169 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.544509 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.544570 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.544584 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.544605 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.544619 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.648005 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.648093 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.648109 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.648155 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.648169 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.751618 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.751665 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.751676 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.751693 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.751705 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.803554 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:38 crc kubenswrapper[4713]: E0126 15:35:38.804049 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
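The setters.go entries embed the Ready condition as inline JSON, which makes them easy to post-process. A small sketch that decodes one such payload into a local mirror of the logged fields (the message is abridged; the struct is illustrative, not the real k8s.io/api type):

```go
// condparse.go - a dependency-free sketch for decoding the condition JSON
// printed by the "Node became not ready" entries above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Payload copied (message abridged) from one of the entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n",
		c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
```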
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.854940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.855006 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.855022 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.855042 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.855055 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.919807 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:48:43.941574218 +0000 UTC Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.957144 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.957181 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.957192 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.957208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4713]: I0126 15:35:38.957219 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.060835 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.060892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.060903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.060921 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.060933 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.163741 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.163801 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.163816 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.163833 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.163846 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.266563 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.266605 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.266616 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.266634 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.266646 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.369145 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.369196 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.369208 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.369227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.369240 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.471793 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.471839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.471848 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.471865 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.471874 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.574330 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.574394 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.574406 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.574422 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.574435 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.677633 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.677717 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.677744 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.677773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.677795 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.780883 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.780935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.780947 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.780976 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.781002 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.802618 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.802722 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.802618 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:39 crc kubenswrapper[4713]: E0126 15:35:39.802808 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
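The pod_workers errors above hit only pods that need the cluster network. The kubelet applies roughly the following gate, letting host-network pods proceed while everything else waits (a simplified sketch, not the kubelet's actual code):

```go
// syncgate.go - a simplified sketch of the gate pod_workers is applying
// above: host-network pods may sync even while the runtime reports the
// cluster network as not ready; everything else is skipped with an error.
package main

import "fmt"

func canSyncPod(hostNetwork, networkReady bool) error {
	if hostNetwork || networkReady {
		return nil
	}
	return fmt.Errorf("network is not ready: container runtime network not ready")
}

func main() {
	// network-check-target-xd92c needs the cluster network, so it is skipped.
	fmt.Println(canSyncPod(false, false))
	// a host-network pod (e.g. a static control-plane pod) would still sync.
	fmt.Println(canSyncPod(true, false))
}
```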
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:39 crc kubenswrapper[4713]: E0126 15:35:39.802901 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:39 crc kubenswrapper[4713]: E0126 15:35:39.802958 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.883995 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.884069 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.884084 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.884105 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.884119 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.920602 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:01:01.262975867 +0000 UTC Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.987688 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.987753 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.987773 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.987799 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4713]: I0126 15:35:39.987818 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.090903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.090946 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.090957 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.090974 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.090990 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.194016 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.194070 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.194086 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.194106 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.194118 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.304061 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.304684 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.304709 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.304728 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.304744 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.407836 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.407904 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.407916 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.407941 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.407954 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.511227 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.511338 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.511423 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.511460 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.511483 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.615031 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.615118 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.615127 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.615144 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.615175 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.718223 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.718279 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.718291 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.718312 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.718325 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.802656 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:40 crc kubenswrapper[4713]: E0126 15:35:40.802832 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.820764 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.820828 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.820839 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.820859 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.820872 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.921608 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:50:51.58703769 +0000 UTC Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.923388 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.923424 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.923437 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.923457 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4713]: I0126 15:35:40.923468 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
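Note the certificate_manager.go lines: the expiration stays fixed at 2026-02-24 05:53:03 UTC, yet the rotation deadline differs on every pass. That is expected behavior: client-go recomputes a jittered deadline each time it logs, drawing a point roughly 70-84% into the certificate's validity window (the exact jitter has varied across versions). A sketch of that computation; NotBefore is an assumption, since the log prints only the expiration:

```go
// rotjitter.go - a sketch of why the logged rotation deadline changes on
// every certificate_manager.go line. The jitter formula approximates
// client-go's behavior; the one-year validity is an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := float64(notAfter.Sub(notBefore))
	frac := 0.7 * (1 + 0.2*rand.Float64()) // base 70% of validity, jittered up to +20%
	return notBefore.Add(time.Duration(validity * frac))
}

func main() {
	// Expiration copied from the log; NotBefore assumed one year earlier,
	// which is consistent with the deadlines logged above.
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}
```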
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.025992 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.026033 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.026041 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.026054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.026064 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.129961 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.130058 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.130081 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.130106 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.130124 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.232787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.232882 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.232908 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.232940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.232960 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.336632 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.336693 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.336714 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.336755 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.336804 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.440391 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.440460 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.440478 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.440505 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.440522 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.543393 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.543449 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.543466 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.543489 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.543506 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.646142 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.646195 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.646206 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.646222 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.646234 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.750493 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.750556 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.750570 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.750589 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.750601 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.803141 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.803268 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:41 crc kubenswrapper[4713]: E0126 15:35:41.803320 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:41 crc kubenswrapper[4713]: E0126 15:35:41.804263 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.804416 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:41 crc kubenswrapper[4713]: E0126 15:35:41.804560 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.853435 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.853481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.853491 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.853510 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.853520 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.922602 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:27:05.816781532 +0000 UTC
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.956935 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.956977 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.956990 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.957008 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:41 crc kubenswrapper[4713]: I0126 15:35:41.957022 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.059663 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.059703 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.059712 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.059725 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.059734 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
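The certificate_manager.go:356 lines print a different rotation deadline on each pass, and every computed deadline here is already in the past relative to the log clock, so the manager keeps re-attempting rotation. The deadline moves because client-go jitters it inside the certificate's validity window; the 70-90% band in the sketch below is the behavior client-go's certificate manager documents, assumed here rather than confirmed by this log, and the issue time is likewise an assumption since only the expiration appears above:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in the 70-90% band of the
    // certificate's validity, approximating client-go's certificate
    // manager; the exact band is an assumption here.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
    	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issue time is not in the log; assumed one year
    	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }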
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.162787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.162859 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.162879 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.162905 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.162925 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.265329 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.265428 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.265453 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.265481 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.265499 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.368672 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.368761 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.368800 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.368834 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.368859 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.472480 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.472572 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.472603 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.472636 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.472659 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.574810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.574862 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.574871 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.574886 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.574897 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.678096 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.678186 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.678248 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.678273 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.678287 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.781566 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.781633 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.781650 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.781674 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.781697 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.803549 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:35:42 crc kubenswrapper[4713]: E0126 15:35:42.803673 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.884931 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.884986 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.885001 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.885021 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.885036 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.922780 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:22:42.83553146 +0000 UTC
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.987855 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.987920 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.987934 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.987953 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:42 crc kubenswrapper[4713]: I0126 15:35:42.987969 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.091424 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.091477 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.091492 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.091537 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.091557 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.195294 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.195426 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.195447 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.195472 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.195491 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.298003 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.298063 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.298077 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.298097 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.298113 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.400780 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.400853 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.400870 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.400903 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.400923 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.504707 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.504765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.504783 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.504809 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.504827 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.535959 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/1.log"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.536822 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/0.log"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.536893 4713 generic.go:334] "Generic (PLEG): container finished" podID="d21f731c-7a63-4c3c-bdc5-9267197741d4" containerID="81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77" exitCode=1
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.536937 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerDied","Data":"81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.536985 4713 scope.go:117] "RemoveContainer" containerID="5ed51cc2dbf0881837910293ab5e8633483a015fc95d4b1c245364ea21abda79"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.537426 4713 scope.go:117] "RemoveContainer" containerID="81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77"
Jan 26 15:35:43 crc kubenswrapper[4713]: E0126 15:35:43.537623 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4ld7b_openshift-multus(d21f731c-7a63-4c3c-bdc5-9267197741d4)\"" pod="openshift-multus/multus-4ld7b" podUID="d21f731c-7a63-4c3c-bdc5-9267197741d4"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.631690 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.632050 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.632064 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.632083 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
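The "back-off 10s restarting failed container=kube-multus" entry above is the first step of the kubelet's crash-loop backoff, which doubles on each failed restart up to a cap. The 10s start matches the log; the 5-minute cap below is the stock kubelet default (MaxContainerBackOff), assumed here since it is not visible in this excerpt:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initial    = 10 * time.Second // matches "back-off 10s" above
    		maxBackoff = 5 * time.Minute  // kubelet's default cap (assumed)
    	)
    	// Print the backoff that CrashLoopBackOff would report for the
    	// first few consecutive restart failures.
    	for d, i := time.Duration(initial), 1; i <= 7; i++ {
    		fmt.Printf("restart %d: back-off %s\n", i, d)
    		d *= 2
    		if d > maxBackoff {
    			d = maxBackoff
    		}
    	}
    }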
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.632098 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.640839 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=94.640812596 podStartE2EDuration="1m34.640812596s" podCreationTimestamp="2026-01-26 15:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.60260227 +0000 UTC m=+118.739619555" watchObservedRunningTime="2026-01-26 15:35:43.640812596 +0000 UTC m=+118.777829861"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.728090 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5gf9s" podStartSLOduration=97.72806490400001 podStartE2EDuration="1m37.728064904s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.708265652 +0000 UTC m=+118.845282897" watchObservedRunningTime="2026-01-26 15:35:43.728064904 +0000 UTC m=+118.865082139"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.735008 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.735044 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.735054 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.735070 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.735082 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
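Each pod_startup_latency_tracker.go:104 entry is simple arithmetic: podStartSLOduration is the time from pod creation to the observed running time, with image-pull time excluded when a pull actually happened (the zero-valued firstStartedPulling/lastFinishedPulling fields above mean no pull was counted). A sketch checking the openshift-etcd/etcd-crc entry above, with both timestamps copied verbatim from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go's default time.Time String() layout, which the log uses.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    	created, _ := time.Parse(layout, "2026-01-26 15:34:09 +0000 UTC")
    	running, _ := time.Parse(layout, "2026-01-26 15:35:43.640812596 +0000 UTC")

    	// No image pull was recorded, so the SLO duration is just the
    	// difference: 1m34.640812596s, matching podStartSLOduration above.
    	fmt.Println("podStartSLOduration:", running.Sub(created))
    }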
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.741462 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=51.741442974 podStartE2EDuration="51.741442974s" podCreationTimestamp="2026-01-26 15:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.741169347 +0000 UTC m=+118.878186582" watchObservedRunningTime="2026-01-26 15:35:43.741442974 +0000 UTC m=+118.878460209"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.773088 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=95.773062763 podStartE2EDuration="1m35.773062763s" podCreationTimestamp="2026-01-26 15:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.757737907 +0000 UTC m=+118.894755162" watchObservedRunningTime="2026-01-26 15:35:43.773062763 +0000 UTC m=+118.910079988"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.803133 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.803169 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.803136 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:43 crc kubenswrapper[4713]: E0126 15:35:43.803260 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.803449 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t2rqh" podStartSLOduration=97.803435925 podStartE2EDuration="1m37.803435925s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.785623639 +0000 UTC m=+118.922640884" watchObservedRunningTime="2026-01-26 15:35:43.803435925 +0000 UTC m=+118.940453160"
Jan 26 15:35:43 crc kubenswrapper[4713]: E0126 15:35:43.803458 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:43 crc kubenswrapper[4713]: E0126 15:35:43.803476 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.836968 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.837004 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.837015 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.837030 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.837041 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.909988 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podStartSLOduration=97.909960811 podStartE2EDuration="1m37.909960811s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.874744321 +0000 UTC m=+119.011761556" watchObservedRunningTime="2026-01-26 15:35:43.909960811 +0000 UTC m=+119.046978076" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.923422 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:37:25.434108418 +0000 UTC Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.939002 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.939046 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.939058 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.939076 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.939089 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.958813 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=95.958780038 podStartE2EDuration="1m35.958780038s" podCreationTimestamp="2026-01-26 15:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.927775207 +0000 UTC m=+119.064792442" watchObservedRunningTime="2026-01-26 15:35:43.958780038 +0000 UTC m=+119.095797293" Jan 26 15:35:43 crc kubenswrapper[4713]: I0126 15:35:43.982140 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.982110111 podStartE2EDuration="1m9.982110111s" podCreationTimestamp="2026-01-26 15:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:43.953441787 +0000 UTC m=+119.090459022" watchObservedRunningTime="2026-01-26 15:35:43.982110111 +0000 UTC m=+119.119127346" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.008655 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fgqsv" podStartSLOduration=98.008632274 podStartE2EDuration="1m38.008632274s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:44.007934765 +0000 UTC m=+119.144952000" watchObservedRunningTime="2026-01-26 15:35:44.008632274 +0000 UTC m=+119.145649519" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.020503 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-92r8b" podStartSLOduration=97.020478971 podStartE2EDuration="1m37.020478971s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:44.02010356 +0000 UTC m=+119.157120825" watchObservedRunningTime="2026-01-26 15:35:44.020478971 +0000 UTC m=+119.157496206" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.042091 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.042135 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.042143 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.042158 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.042170 4713 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.144787 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.144838 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.144850 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.144865 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.144876 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.247754 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.247802 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.247842 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.247861 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.247872 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.351404 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.351475 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.351492 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.351518 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.351539 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.455475 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.455565 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.455599 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.455639 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.455662 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.545527 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/1.log"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.559561 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.559642 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.559664 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.559694 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.559715 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.663765 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.663844 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.663863 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.663892 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.663915 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.767810 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.767884 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.767905 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.767940 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.767968 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.803326 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps"
Jan 26 15:35:44 crc kubenswrapper[4713]: E0126 15:35:44.803583 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.870261 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.870331 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.870349 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.870413 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.870432 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.924091 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:03:10.323228032 +0000 UTC Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.972139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.972199 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.972216 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.972241 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4713]: I0126 15:35:44.972257 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.076035 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.076121 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.076145 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.076175 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.076194 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.078756 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.078840 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.078869 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.078896 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.078912 4713 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.145688 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh"] Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.146123 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.149873 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.150650 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.150916 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.151516 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.279413 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.279807 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87baf-3725-45a0-aeba-4a911059102d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.279947 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38c87baf-3725-45a0-aeba-4a911059102d-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.280105 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87baf-3725-45a0-aeba-4a911059102d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.280251 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.381567 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.381692 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.381775 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87baf-3725-45a0-aeba-4a911059102d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.381830 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38c87baf-3725-45a0-aeba-4a911059102d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.381918 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87baf-3725-45a0-aeba-4a911059102d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.382010 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: 
\"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.382113 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/38c87baf-3725-45a0-aeba-4a911059102d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.383539 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38c87baf-3725-45a0-aeba-4a911059102d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.391744 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87baf-3725-45a0-aeba-4a911059102d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.414301 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87baf-3725-45a0-aeba-4a911059102d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n72xh\" (UID: \"38c87baf-3725-45a0-aeba-4a911059102d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.461121 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.551024 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" event={"ID":"38c87baf-3725-45a0-aeba-4a911059102d","Type":"ContainerStarted","Data":"9fefe4662a57acb82cefbcf734ba5cc6ce4f6526d322e5afa44521247e9cbe50"} Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.794324 4713 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.802760 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.802909 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.802804 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.804042 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.804594 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.804925 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.805235 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.805694 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2drw2_openshift-ovn-kubernetes(4ba2d551-0768-4bac-9af5-bd6e7e58ce8c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" Jan 26 15:35:45 crc kubenswrapper[4713]: E0126 15:35:45.904311 4713 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.925274 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:03:53.666350262 +0000 UTC Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.925425 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 15:35:45 crc kubenswrapper[4713]: I0126 15:35:45.936175 4713 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 15:35:46 crc kubenswrapper[4713]: I0126 15:35:46.557129 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" event={"ID":"38c87baf-3725-45a0-aeba-4a911059102d","Type":"ContainerStarted","Data":"44d6e63acdfe5ceca0b4f150bbed2fa86f25046971b40831f1c0bd1fec09e167"} Jan 26 15:35:46 crc kubenswrapper[4713]: I0126 15:35:46.803399 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:46 crc kubenswrapper[4713]: E0126 15:35:46.803609 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:47 crc kubenswrapper[4713]: I0126 15:35:47.802472 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:47 crc kubenswrapper[4713]: I0126 15:35:47.802537 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:47 crc kubenswrapper[4713]: I0126 15:35:47.802578 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:47 crc kubenswrapper[4713]: E0126 15:35:47.802621 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:47 crc kubenswrapper[4713]: E0126 15:35:47.802757 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:47 crc kubenswrapper[4713]: E0126 15:35:47.802870 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:48 crc kubenswrapper[4713]: I0126 15:35:48.802732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:48 crc kubenswrapper[4713]: E0126 15:35:48.803028 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:49 crc kubenswrapper[4713]: I0126 15:35:49.803421 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:49 crc kubenswrapper[4713]: I0126 15:35:49.803564 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:49 crc kubenswrapper[4713]: E0126 15:35:49.803604 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:49 crc kubenswrapper[4713]: I0126 15:35:49.803449 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:49 crc kubenswrapper[4713]: E0126 15:35:49.804108 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:49 crc kubenswrapper[4713]: E0126 15:35:49.804251 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:50 crc kubenswrapper[4713]: I0126 15:35:50.802919 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:50 crc kubenswrapper[4713]: E0126 15:35:50.803084 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:50 crc kubenswrapper[4713]: E0126 15:35:50.906243 4713 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:35:51 crc kubenswrapper[4713]: I0126 15:35:51.803312 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:51 crc kubenswrapper[4713]: I0126 15:35:51.803351 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:51 crc kubenswrapper[4713]: E0126 15:35:51.803568 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:51 crc kubenswrapper[4713]: I0126 15:35:51.803614 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:51 crc kubenswrapper[4713]: E0126 15:35:51.803816 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:51 crc kubenswrapper[4713]: E0126 15:35:51.804051 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:52 crc kubenswrapper[4713]: I0126 15:35:52.802668 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:52 crc kubenswrapper[4713]: E0126 15:35:52.802863 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:53 crc kubenswrapper[4713]: I0126 15:35:53.802860 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:53 crc kubenswrapper[4713]: I0126 15:35:53.802946 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:53 crc kubenswrapper[4713]: E0126 15:35:53.803504 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:53 crc kubenswrapper[4713]: I0126 15:35:53.802955 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:53 crc kubenswrapper[4713]: E0126 15:35:53.803629 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:53 crc kubenswrapper[4713]: E0126 15:35:53.803770 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:54 crc kubenswrapper[4713]: I0126 15:35:54.802610 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:54 crc kubenswrapper[4713]: I0126 15:35:54.803548 4713 scope.go:117] "RemoveContainer" containerID="81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77" Jan 26 15:35:54 crc kubenswrapper[4713]: E0126 15:35:54.803638 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:54 crc kubenswrapper[4713]: I0126 15:35:54.825636 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n72xh" podStartSLOduration=108.825614796 podStartE2EDuration="1m48.825614796s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:46.577991151 +0000 UTC m=+121.715008436" watchObservedRunningTime="2026-01-26 15:35:54.825614796 +0000 UTC m=+129.962632021" Jan 26 15:35:55 crc kubenswrapper[4713]: I0126 15:35:55.592933 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/1.log" Jan 26 15:35:55 crc kubenswrapper[4713]: I0126 15:35:55.592998 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerStarted","Data":"c09e4420e3c3da6375408a7e83498526aaae364774050a8fa7364578b9ec8e35"} Jan 26 15:35:55 crc kubenswrapper[4713]: I0126 15:35:55.802492 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:55 crc kubenswrapper[4713]: I0126 15:35:55.803921 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:55 crc kubenswrapper[4713]: E0126 15:35:55.803919 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:55 crc kubenswrapper[4713]: I0126 15:35:55.803959 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:55 crc kubenswrapper[4713]: E0126 15:35:55.804023 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:55 crc kubenswrapper[4713]: E0126 15:35:55.804250 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:55 crc kubenswrapper[4713]: E0126 15:35:55.907833 4713 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:35:56 crc kubenswrapper[4713]: I0126 15:35:56.803318 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:56 crc kubenswrapper[4713]: E0126 15:35:56.803498 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:56 crc kubenswrapper[4713]: I0126 15:35:56.804236 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.601799 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/3.log" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.605112 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerStarted","Data":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.605738 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.649462 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4ld7b" podStartSLOduration=111.649436545 podStartE2EDuration="1m51.649436545s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:55.618612803 +0000 UTC m=+130.755630038" watchObservedRunningTime="2026-01-26 15:35:57.649436545 +0000 UTC m=+132.786453800" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.649944 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podStartSLOduration=111.649936309 podStartE2EDuration="1m51.649936309s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:57.647453438 +0000 UTC m=+132.784470693" watchObservedRunningTime="2026-01-26 15:35:57.649936309 +0000 UTC m=+132.786953564" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.698600 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4vgps"] Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.698747 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:57 crc kubenswrapper[4713]: E0126 15:35:57.698859 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.803248 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:57 crc kubenswrapper[4713]: E0126 15:35:57.803399 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.803481 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:57 crc kubenswrapper[4713]: I0126 15:35:57.803571 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:57 crc kubenswrapper[4713]: E0126 15:35:57.803689 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:57 crc kubenswrapper[4713]: E0126 15:35:57.803753 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:59 crc kubenswrapper[4713]: I0126 15:35:59.802814 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:59 crc kubenswrapper[4713]: I0126 15:35:59.802885 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:35:59 crc kubenswrapper[4713]: I0126 15:35:59.802925 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:59 crc kubenswrapper[4713]: I0126 15:35:59.802815 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:59 crc kubenswrapper[4713]: E0126 15:35:59.803130 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:59 crc kubenswrapper[4713]: E0126 15:35:59.803198 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vgps" podUID="6f185439-f527-44bf-8362-a9cf40e00d3c" Jan 26 15:35:59 crc kubenswrapper[4713]: E0126 15:35:59.802972 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:59 crc kubenswrapper[4713]: E0126 15:35:59.803266 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.803201 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.803339 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.803522 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.804293 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.807009 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.807076 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.807167 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.807841 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.810169 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 15:36:01 crc kubenswrapper[4713]: I0126 15:36:01.810213 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.575139 4713 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.628925 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rfpbx"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.630112 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.630915 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.631729 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.638760 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.641932 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hxmkn"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.642498 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.643186 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.643335 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.643422 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.643539 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.643857 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.644533 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.645332 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.645760 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.645781 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646004 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646193 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646490 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646755 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646864 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.646870 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.648149 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.653705 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.654450 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.655203 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.656728 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.657507 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.658330 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.658881 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.673162 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.673351 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.675810 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.675933 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.677579 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.677883 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.678104 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.678737 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7fwh2"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.678941 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.679280 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.679755 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.679935 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.680271 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.680427 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.682391 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.682464 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.683042 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.682446 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.683447 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.683485 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.684109 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.693781 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.694214 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.694396 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.694536 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.694689 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.695152 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.695542 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.695988 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.696466 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f465s"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.696651 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.697288 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.697846 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.697407 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.699030 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.699215 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.704356 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dnw7f"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.705056 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.705397 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.705701 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-55x6b"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.705789 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.706459 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.706603 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.706873 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.706967 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707001 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707127 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707206 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707241 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.706974 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707345 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.707474 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.709973 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.710323 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.711221 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.711590 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.712041 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.712250 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.715928 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.733315 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.734224 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kszgv"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.734679 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.735211 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.735496 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.735677 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.737633 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.738124 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.738525 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.738877 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.773238 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.773763 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.774064 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.775716 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lxzxj"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.780946 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.782384 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.782512 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.782657 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.782846 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.783893 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784015 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784107 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784293 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784419 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784489 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784669 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784706 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784828 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784888 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.784964 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785082 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785128 4713 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785211 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785297 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785088 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785438 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785473 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785577 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.785952 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786203 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hncj\" (UniqueName: \"kubernetes.io/projected/738df9e4-f531-420c-a4d6-2f3091d86068-kube-api-access-2hncj\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786316 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786349 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhwbk\" (UniqueName: \"kubernetes.io/projected/9b229eeb-448b-4abe-9ba0-fe7dfc6e589e-kube-api-access-dhwbk\") pod \"downloads-7954f5f757-55x6b\" (UID: \"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e\") " pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786411 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/619d9117-d7de-4088-a239-bcf1b3560380-trusted-ca\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786436 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-images\") pod 
\"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786703 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786754 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/619d9117-d7de-4088-a239-bcf1b3560380-metrics-tls\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786791 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjc8\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-kube-api-access-4mjc8\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786818 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-client\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786846 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786868 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786893 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5nq\" (UniqueName: \"kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.786927 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-config\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787084 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8716a7-082f-463c-9a07-da822550f992-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787163 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqxr\" (UniqueName: \"kubernetes.io/projected/9a8716a7-082f-463c-9a07-da822550f992-kube-api-access-8wqxr\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787216 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787265 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-config\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787290 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787432 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787475 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8qk\" (UniqueName: \"kubernetes.io/projected/6b3f2a4f-1918-41bd-b81e-662f947d63d3-kube-api-access-cq8qk\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787514 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 
15:36:05.787568 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787619 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8716a7-082f-463c-9a07-da822550f992-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787649 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-serving-cert\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787672 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/738df9e4-f531-420c-a4d6-2f3091d86068-proxy-tls\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787717 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-service-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787747 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8814b03e-4835-4e4b-863b-acb4a7473f54-serving-cert\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787774 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7npd\" (UniqueName: \"kubernetes.io/projected/8814b03e-4835-4e4b-863b-acb4a7473f54-kube-api-access-f7npd\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.787805 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc 
kubenswrapper[4713]: I0126 15:36:05.787837 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-service-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.789768 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.790667 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.790875 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.791017 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.791859 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.792003 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.792098 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hxmkn"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.792230 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.792451 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.793663 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f465s"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.802464 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.823777 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.824740 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.824975 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.825020 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rfpbx"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.824997 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.825769 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.825922 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826035 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826140 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826281 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826399 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826494 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826452 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.826679 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.827421 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.828884 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c"] Jan 26 15:36:05 crc 
kubenswrapper[4713]: I0126 15:36:05.829659 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.830613 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.831904 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7fwh2"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.833503 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.833622 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.833896 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.834165 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.834345 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.834576 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.834726 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.835249 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.836085 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.836891 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.837463 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.838404 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.838802 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.838977 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.833542 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.839423 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-55x6b"] 
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.840142 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.840254 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.855162 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.855596 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.857094 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.874242 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.875989 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.876546 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dnw7f"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.880137 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.882453 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.884640 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.889813 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/05b2187e-ae3f-460d-8bd1-0d950c1e0535-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.889892 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlql7\" (UniqueName: \"kubernetes.io/projected/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-kube-api-access-xlql7\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.889936 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-service-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.890043 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlbl\" (UniqueName: \"kubernetes.io/projected/ab00e40d-3300-4351-89df-203b1bf11d72-kube-api-access-6hlbl\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.890095 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.890114 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d10aa23f-eb67-42dc-84b9-9489eeac389e-machine-approver-tls\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.890168 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-config\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.896809 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.896987 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-images\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.897070 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab3b4047-952e-4f97-afb6-b7418db3519d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.897116 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-client\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.897460 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.909318 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.909427 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.909924 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hncj\" (UniqueName: \"kubernetes.io/projected/738df9e4-f531-420c-a4d6-2f3091d86068-kube-api-access-2hncj\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910000 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b2187e-ae3f-460d-8bd1-0d950c1e0535-serving-cert\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910076 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-encryption-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 
15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910148 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhwbk\" (UniqueName: \"kubernetes.io/projected/9b229eeb-448b-4abe-9ba0-fe7dfc6e589e-kube-api-access-dhwbk\") pod \"downloads-7954f5f757-55x6b\" (UID: \"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e\") " pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910291 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/619d9117-d7de-4088-a239-bcf1b3560380-trusted-ca\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910380 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-images\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910463 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-image-import-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910533 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-client\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910614 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910713 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab00e40d-3300-4351-89df-203b1bf11d72-serving-cert\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910792 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/619d9117-d7de-4088-a239-bcf1b3560380-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910869 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b058206c-e1d2-41d2-ae2f-c428ad49eea4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.910964 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911038 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjc8\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-kube-api-access-4mjc8\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911109 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7kb2\" (UniqueName: \"kubernetes.io/projected/9c219134-328d-4145-8dd2-3f01df03a055-kube-api-access-h7kb2\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911180 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-serving-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911248 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-client\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911320 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-encryption-config\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911414 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee95fd-840e-47fe-8c69-aeef8cef6e80-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911492 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911560 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5nq\" (UniqueName: \"kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911639 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.911996 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdcdv\" (UniqueName: \"kubernetes.io/projected/d10aa23f-eb67-42dc-84b9-9489eeac389e-kube-api-access-bdcdv\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912090 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8n8s\" (UniqueName: \"kubernetes.io/projected/b058206c-e1d2-41d2-ae2f-c428ad49eea4-kube-api-access-z8n8s\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912178 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-audit\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912293 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-config\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912406 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c219134-328d-4145-8dd2-3f01df03a055-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:05 
crc kubenswrapper[4713]: I0126 15:36:05.912477 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912541 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912606 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912710 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8716a7-082f-463c-9a07-da822550f992-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912836 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqxr\" (UniqueName: \"kubernetes.io/projected/9a8716a7-082f-463c-9a07-da822550f992-kube-api-access-8wqxr\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.912925 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-audit-dir\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913010 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-serving-cert\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913100 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913212 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913335 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-config\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913454 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-config\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913555 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7gf\" (UniqueName: \"kubernetes.io/projected/68ee95fd-840e-47fe-8c69-aeef8cef6e80-kube-api-access-cj7gf\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913660 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913781 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913871 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913979 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914079 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8qk\" (UniqueName: \"kubernetes.io/projected/6b3f2a4f-1918-41bd-b81e-662f947d63d3-kube-api-access-cq8qk\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914177 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-policies\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914254 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914326 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee95fd-840e-47fe-8c69-aeef8cef6e80-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914416 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914485 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgnp\" (UniqueName: \"kubernetes.io/projected/05b2187e-ae3f-460d-8bd1-0d950c1e0535-kube-api-access-9bgnp\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914560 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-auth-proxy-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914635 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b058206c-e1d2-41d2-ae2f-c428ad49eea4-proxy-tls\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914701 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-node-pullsecrets\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914773 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914841 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914919 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8716a7-082f-463c-9a07-da822550f992-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915017 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-trusted-ca\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915090 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm9n9\" (UniqueName: \"kubernetes.io/projected/6eb2408a-c785-4784-9f65-a2fe7d218903-kube-api-access-cm9n9\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915155 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915222 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-serving-cert\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915296 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-serving-cert\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915413 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/738df9e4-f531-420c-a4d6-2f3091d86068-proxy-tls\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915493 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-dir\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915579 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.922598 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-service-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.922769 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nc64\" (UniqueName: \"kubernetes.io/projected/ab3b4047-952e-4f97-afb6-b7418db3519d-kube-api-access-6nc64\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.922883 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.922990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8814b03e-4835-4e4b-863b-acb4a7473f54-serving-cert\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.923087 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7npd\" (UniqueName: \"kubernetes.io/projected/8814b03e-4835-4e4b-863b-acb4a7473f54-kube-api-access-f7npd\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.915597 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-images\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.924260 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-service-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.916850 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-config\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.914707 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h7k9"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.925800 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.926091 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814b03e-4835-4e4b-863b-acb4a7473f54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.926498 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.927628 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h"] Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.927957 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.920032 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8716a7-082f-463c-9a07-da822550f992-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.928394 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.913932 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.917642 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.930886 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/738df9e4-f531-420c-a4d6-2f3091d86068-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.934382 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.934826 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.935671 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.939077 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.941235 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8716a7-082f-463c-9a07-da822550f992-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.942761 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.943843 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.947222 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" 
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.947528 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8814b03e-4835-4e4b-863b-acb4a7473f54-serving-cert\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.948126 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.955068 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.957980 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.960858 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9hkc7"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.962031 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.966500 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.968407 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z58gc"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.970260 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/738df9e4-f531-420c-a4d6-2f3091d86068-proxy-tls\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.970473 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.970657 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.971504 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.972873 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vntj"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.973025 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.974068 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.974663 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.974686 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.974766 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6vntj"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.975527 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.976581 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l6pr5"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.978544 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.978578 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.980611 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kszgv"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.980904 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.981541 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l6pr5"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.981901 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.982879 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-clnp7"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.983764 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-clnp7"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.984078 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.985259 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.986757 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.989122 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h7k9"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.989569 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.989675 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.990933 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.991959 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z58gc"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.993494 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.993685 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.994538 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l6pr5"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.995969 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9c65z"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.996997 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9c65z"
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.997253 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9c65z"]
Jan 26 15:36:05 crc kubenswrapper[4713]: I0126 15:36:05.999500 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9hkc7"]
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.001964 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vntj"]
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.014121 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026145 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlbl\" (UniqueName: \"kubernetes.io/projected/ab00e40d-3300-4351-89df-203b1bf11d72-kube-api-access-6hlbl\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026228 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/05b2187e-ae3f-460d-8bd1-0d950c1e0535-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026260 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlql7\" (UniqueName: \"kubernetes.io/projected/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-kube-api-access-xlql7\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026300 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026328 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d10aa23f-eb67-42dc-84b9-9489eeac389e-machine-approver-tls\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026355 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-config\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026420 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026451 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026481 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-images\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026509 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab3b4047-952e-4f97-afb6-b7418db3519d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026533 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-client\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026556 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b2187e-ae3f-460d-8bd1-0d950c1e0535-serving-cert\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026581 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-encryption-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026646 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026678 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-image-import-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026685 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/05b2187e-ae3f-460d-8bd1-0d950c1e0535-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026699 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-client\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab00e40d-3300-4351-89df-203b1bf11d72-serving-cert\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026815 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b058206c-e1d2-41d2-ae2f-c428ad49eea4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026848 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-serving-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026874 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026877 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7kb2\" (UniqueName: \"kubernetes.io/projected/9c219134-328d-4145-8dd2-3f01df03a055-kube-api-access-h7kb2\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026931 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-encryption-config\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026953 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee95fd-840e-47fe-8c69-aeef8cef6e80-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.026990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdcdv\" (UniqueName: \"kubernetes.io/projected/d10aa23f-eb67-42dc-84b9-9489eeac389e-kube-api-access-bdcdv\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027012 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8n8s\" (UniqueName: \"kubernetes.io/projected/b058206c-e1d2-41d2-ae2f-c428ad49eea4-kube-api-access-z8n8s\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027037 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-audit\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027080 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c219134-328d-4145-8dd2-3f01df03a055-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027100 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027122 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027160 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027202 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-audit-dir\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-serving-cert\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027264 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027295 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-config\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027317 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj7gf\" (UniqueName: \"kubernetes.io/projected/68ee95fd-840e-47fe-8c69-aeef8cef6e80-kube-api-access-cj7gf\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027384 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027416 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027460 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-policies\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027494 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee95fd-840e-47fe-8c69-aeef8cef6e80-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027525 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgnp\" (UniqueName: \"kubernetes.io/projected/05b2187e-ae3f-460d-8bd1-0d950c1e0535-kube-api-access-9bgnp\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027549 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-auth-proxy-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027755 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-serving-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027766 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027807 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b058206c-e1d2-41d2-ae2f-c428ad49eea4-proxy-tls\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027832 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-node-pullsecrets\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027850 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027892 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-trusted-ca\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027913 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm9n9\" (UniqueName: \"kubernetes.io/projected/6eb2408a-c785-4784-9f65-a2fe7d218903-kube-api-access-cm9n9\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027966 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-serving-cert\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.027997 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nc64\" (UniqueName: \"kubernetes.io/projected/ab3b4047-952e-4f97-afb6-b7418db3519d-kube-api-access-6nc64\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.028025 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-dir\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.028051 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.029800 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b058206c-e1d2-41d2-ae2f-c428ad49eea4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.030757 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab00e40d-3300-4351-89df-203b1bf11d72-serving-cert\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.030806 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-node-pullsecrets\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.030823 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-dir\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.030883 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-encryption-config\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.031518 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.032021 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.032321 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-trusted-ca\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.033339 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-serving-cert\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.034311 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b2187e-ae3f-460d-8bd1-0d950c1e0535-serving-cert\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.034470 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee95fd-840e-47fe-8c69-aeef8cef6e80-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.034822 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-image-import-ca\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.034871 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-audit-policies\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.035057 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.035563 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.035620 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-etcd-client\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.035627 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab00e40d-3300-4351-89df-203b1bf11d72-config\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.035832 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.036297 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c219134-328d-4145-8dd2-3f01df03a055-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.036325 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-images\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.036349 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6eb2408a-c785-4784-9f65-a2fe7d218903-audit\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.036951 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c219134-328d-4145-8dd2-3f01df03a055-config\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.037255 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6eb2408a-c785-4784-9f65-a2fe7d218903-audit-dir\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.037480 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.037921 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.037926 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-serving-cert\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.037992 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d10aa23f-eb67-42dc-84b9-9489eeac389e-machine-approver-tls\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.038066 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.038135 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10aa23f-eb67-42dc-84b9-9489eeac389e-auth-proxy-config\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.039693 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab3b4047-952e-4f97-afb6-b7418db3519d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.040615 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b058206c-e1d2-41d2-ae2f-c428ad49eea4-proxy-tls\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.040691 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6eb2408a-c785-4784-9f65-a2fe7d218903-encryption-config\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.042507 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee95fd-840e-47fe-8c69-aeef8cef6e80-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.044977 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-etcd-client\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.055404 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.075533 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.094727 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.114329 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.122567 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.134929 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.154438 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.174485 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.183099 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.194088 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.196948 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.221310 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.230192 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.233649 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.254290 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.262145 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/619d9117-d7de-4088-a239-bcf1b3560380-metrics-tls\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.273987 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.300340 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.307096 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/619d9117-d7de-4088-a239-bcf1b3560380-trusted-ca\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.313829 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.334333 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.343601 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-serving-cert\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") "
pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.353407 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.359919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-client\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.373878 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.379124 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-config\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.394949 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.399067 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.414808 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.420053 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3f2a4f-1918-41bd-b81e-662f947d63d3-etcd-service-ca\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.435236 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.454547 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.474842 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.493819 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.513576 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.533651 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 15:36:06 crc 
kubenswrapper[4713]: I0126 15:36:06.554019 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.594419 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.614409 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.633769 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.654153 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.674742 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.694300 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.714435 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.735017 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.754117 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.773556 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.793497 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.811856 4713 request.go:700] Waited for 1.019017681s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.813416 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.833462 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.854629 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.873441 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.913757 4713 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.932895 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.954632 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.973165 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 15:36:06 crc kubenswrapper[4713]: I0126 15:36:06.994790 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.013846 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.034220 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.068325 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.074608 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.094200 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.140618 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hncj\" (UniqueName: \"kubernetes.io/projected/738df9e4-f531-420c-a4d6-2f3091d86068-kube-api-access-2hncj\") pod \"machine-config-operator-74547568cd-kwb58\" (UID: \"738df9e4-f531-420c-a4d6-2f3091d86068\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.160645 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhwbk\" (UniqueName: \"kubernetes.io/projected/9b229eeb-448b-4abe-9ba0-fe7dfc6e589e-kube-api-access-dhwbk\") pod \"downloads-7954f5f757-55x6b\" (UID: \"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e\") " pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.184027 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjc8\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-kube-api-access-4mjc8\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.202536 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5nq\" (UniqueName: \"kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq\") pod \"console-f9d7485db-p5wsk\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.222999 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8qk\" (UniqueName: \"kubernetes.io/projected/6b3f2a4f-1918-41bd-b81e-662f947d63d3-kube-api-access-cq8qk\") pod \"etcd-operator-b45778765-kszgv\" (UID: \"6b3f2a4f-1918-41bd-b81e-662f947d63d3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.223420 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.231531 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.234597 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7npd\" (UniqueName: \"kubernetes.io/projected/8814b03e-4835-4e4b-863b-acb4a7473f54-kube-api-access-f7npd\") pod \"authentication-operator-69f744f599-7fwh2\" (UID: \"8814b03e-4835-4e4b-863b-acb4a7473f54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.259413 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqxr\" (UniqueName: \"kubernetes.io/projected/9a8716a7-082f-463c-9a07-da822550f992-kube-api-access-8wqxr\") pod \"openshift-apiserver-operator-796bbdcf4f-sbskr\" (UID: \"9a8716a7-082f-463c-9a07-da822550f992\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.273561 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.275298 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619d9117-d7de-4088-a239-bcf1b3560380-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mksjz\" (UID: \"619d9117-d7de-4088-a239-bcf1b3560380\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.293802 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.302684 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.312971 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.314958 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.336096 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.348682 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.358341 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.375350 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.393957 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.418861 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.440339 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.454234 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.462392 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-55x6b"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.474833 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.476785 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.490339 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.497793 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.506114 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.515173 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.536425 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.554607 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.557408 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kszgv"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.565547 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.574723 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:36:07 crc kubenswrapper[4713]: W0126 15:36:07.592526 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b3f2a4f_1918_41bd_b81e_662f947d63d3.slice/crio-a6f5e57858b517ebc446c339beae79f0fe0cc29c38d2b65471aa996bcd9d1afa WatchSource:0}: Error finding container a6f5e57858b517ebc446c339beae79f0fe0cc29c38d2b65471aa996bcd9d1afa: Status 404 returned error can't find the container with id a6f5e57858b517ebc446c339beae79f0fe0cc29c38d2b65471aa996bcd9d1afa Jan 26 15:36:07 crc kubenswrapper[4713]: W0126 15:36:07.593082 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619d9117_d7de_4088_a239_bcf1b3560380.slice/crio-97a02fa908856d66290061cf20ae0b0b26256072066500f580a7ece268183fdd WatchSource:0}: Error finding container 97a02fa908856d66290061cf20ae0b0b26256072066500f580a7ece268183fdd: Status 404 returned error can't find the container with id 97a02fa908856d66290061cf20ae0b0b26256072066500f580a7ece268183fdd Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.596690 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.615029 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.633925 4713 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.640988 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7fwh2"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.653833 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.654304 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-55x6b" 
event={"ID":"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e","Type":"ContainerStarted","Data":"8e7e98a7f2cce5bc43fa6e35b474d7f7f8a25454e338d1ac614a704428970312"} Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.655576 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" event={"ID":"6b3f2a4f-1918-41bd-b81e-662f947d63d3","Type":"ContainerStarted","Data":"a6f5e57858b517ebc446c339beae79f0fe0cc29c38d2b65471aa996bcd9d1afa"} Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.656493 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" event={"ID":"738df9e4-f531-420c-a4d6-2f3091d86068","Type":"ContainerStarted","Data":"3c1379fbad55c476fdd9ee772073c2a3000cc96fbdc57c98aabe465ba0dbf0ab"} Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.657611 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" event={"ID":"619d9117-d7de-4088-a239-bcf1b3560380","Type":"ContainerStarted","Data":"97a02fa908856d66290061cf20ae0b0b26256072066500f580a7ece268183fdd"} Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.674289 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.687026 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.695248 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.714719 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.734015 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr"] Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.735312 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.754621 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.774093 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.793842 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.812130 4713 request.go:700] Waited for 1.814844708s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.814565 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.833860 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 15:36:07 crc 
kubenswrapper[4713]: I0126 15:36:07.854859 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.901286 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlql7\" (UniqueName: \"kubernetes.io/projected/3bd74e89-2dfb-4744-bf0c-7aedd0e799e0-kube-api-access-xlql7\") pod \"apiserver-7bbb656c7d-mxfmm\" (UID: \"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.921192 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7kb2\" (UniqueName: \"kubernetes.io/projected/9c219134-328d-4145-8dd2-3f01df03a055-kube-api-access-h7kb2\") pod \"machine-api-operator-5694c8668f-ss5h8\" (UID: \"9c219134-328d-4145-8dd2-3f01df03a055\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.931988 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") pod \"route-controller-manager-6576b87f9c-8r7k5\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.934636 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.950439 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlbl\" (UniqueName: \"kubernetes.io/projected/ab00e40d-3300-4351-89df-203b1bf11d72-kube-api-access-6hlbl\") pod \"console-operator-58897d9998-hxmkn\" (UID: \"ab00e40d-3300-4351-89df-203b1bf11d72\") " pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.970467 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nc64\" (UniqueName: \"kubernetes.io/projected/ab3b4047-952e-4f97-afb6-b7418db3519d-kube-api-access-6nc64\") pod \"cluster-samples-operator-665b6dd947-wsl5j\" (UID: \"ab3b4047-952e-4f97-afb6-b7418db3519d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.979791 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" Jan 26 15:36:07 crc kubenswrapper[4713]: I0126 15:36:07.994162 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm9n9\" (UniqueName: \"kubernetes.io/projected/6eb2408a-c785-4784-9f65-a2fe7d218903-kube-api-access-cm9n9\") pod \"apiserver-76f77b778f-rfpbx\" (UID: \"6eb2408a-c785-4784-9f65-a2fe7d218903\") " pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.011658 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdcdv\" (UniqueName: \"kubernetes.io/projected/d10aa23f-eb67-42dc-84b9-9489eeac389e-kube-api-access-bdcdv\") pod \"machine-approver-56656f9798-qb8bs\" (UID: \"d10aa23f-eb67-42dc-84b9-9489eeac389e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.032601 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8n8s\" (UniqueName: \"kubernetes.io/projected/b058206c-e1d2-41d2-ae2f-c428ad49eea4-kube-api-access-z8n8s\") pod \"machine-config-controller-84d6567774-bszqz\" (UID: \"b058206c-e1d2-41d2-ae2f-c428ad49eea4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.050697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj7gf\" (UniqueName: \"kubernetes.io/projected/68ee95fd-840e-47fe-8c69-aeef8cef6e80-kube-api-access-cj7gf\") pod \"openshift-controller-manager-operator-756b6f6bc6-2tsvd\" (UID: \"68ee95fd-840e-47fe-8c69-aeef8cef6e80\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.058695 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.071128 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") pod \"controller-manager-879f6c89f-rcgql\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.088808 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgnp\" (UniqueName: \"kubernetes.io/projected/05b2187e-ae3f-460d-8bd1-0d950c1e0535-kube-api-access-9bgnp\") pod \"openshift-config-operator-7777fb866f-f465s\" (UID: \"05b2187e-ae3f-460d-8bd1-0d950c1e0535\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.090651 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.834997 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.835098 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.835163 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.835286 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.836077 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.837263 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.837474 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839065 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839392 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839435 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839490 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqblt\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: E0126 15:36:08.839528 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.339508432 +0000 UTC m=+144.476525667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839589 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839619 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839655 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.839675 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: W0126 15:36:08.848461 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8814b03e_4835_4e4b_863b_acb4a7473f54.slice/crio-307dd7de46587da497fd015dfa7137b6974604ece1500e3ce49bcd8cbb1079c6 WatchSource:0}: Error finding container 307dd7de46587da497fd015dfa7137b6974604ece1500e3ce49bcd8cbb1079c6: Status 404 returned error can't find the container with id 307dd7de46587da497fd015dfa7137b6974604ece1500e3ce49bcd8cbb1079c6 Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944078 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944280 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwq25\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-kube-api-access-dwq25\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944316 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881498d7-4eaa-4654-8c22-61b0060761c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944353 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944396 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.944419 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: E0126 15:36:08.945104 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.445059802 +0000 UTC m=+144.582077077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.948828 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bda2ea83-a2b5-4d40-8362-6db587054562-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.952782 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.952928 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cb5773d-d638-4e73-a955-b936c27c9d7f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.952956 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqj8q\" (UniqueName: \"kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953077 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953122 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953157 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953181 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb5773d-d638-4e73-a955-b936c27c9d7f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953205 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953272 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bda2ea83-a2b5-4d40-8362-6db587054562-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953301 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953411 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953433 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953471 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953535 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/adfa00ba-2415-46e4-b252-dbe5a74ab837-metrics-tls\") pod 
\"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953595 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb5773d-d638-4e73-a955-b936c27c9d7f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.953632 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqblt\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.961918 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: E0126 15:36:08.962207 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.462183689 +0000 UTC m=+144.599200984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963104 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963643 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881498d7-4eaa-4654-8c22-61b0060761c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963676 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963729 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963831 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.963867 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/881498d7-4eaa-4654-8c22-61b0060761c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.965800 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjj26\" (UniqueName: \"kubernetes.io/projected/adfa00ba-2415-46e4-b252-dbe5a74ab837-kube-api-access-kjj26\") pod \"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.966011 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.966716 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.967644 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.967703 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.967961 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.975757 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.984599 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.987977 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.990243 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:08 crc kubenswrapper[4713]: I0126 15:36:08.993146 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.013544 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqblt\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.069880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.070296 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cb5773d-d638-4e73-a955-b936c27c9d7f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.070467 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-metrics-certs\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.070543 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm56d\" (UniqueName: \"kubernetes.io/projected/c9e722bd-c443-4cb6-8104-e630a4c0b58f-kube-api-access-hm56d\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.073042 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.573007028 +0000 UTC m=+144.710024263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.070582 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-cabundle\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075426 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075454 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-default-certificate\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075473 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbft\" (UniqueName: \"kubernetes.io/projected/a9491684-a2f5-4ec9-a42b-7db8021c410f-kube-api-access-8dbft\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075503 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3200b97d-6535-4cfb-981c-aa18f461fff5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075540 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075566 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb5773d-d638-4e73-a955-b936c27c9d7f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 
15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075627 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b73b6008-1681-42fa-b5bb-771a022070d9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075658 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj62s\" (UniqueName: \"kubernetes.io/projected/f8886930-6560-40e0-bb1f-4b63bfd27a39-kube-api-access-kj62s\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075718 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/adfa00ba-2415-46e4-b252-dbe5a74ab837-metrics-tls\") pod \"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075748 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075784 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-node-bootstrap-token\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075812 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb5773d-d638-4e73-a955-b936c27c9d7f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075849 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-key\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075877 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddf9\" (UniqueName: \"kubernetes.io/projected/43111b18-562c-46e1-be8e-56ed79f40d3b-kube-api-access-xddf9\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075903 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/43111b18-562c-46e1-be8e-56ed79f40d3b-tmpfs\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075934 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.075972 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtq2m\" (UniqueName: \"kubernetes.io/projected/e452036b-04a9-44f3-9401-e51bb17872cd-kube-api-access-mtq2m\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076005 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3200b97d-6535-4cfb-981c-aa18f461fff5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076036 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881498d7-4eaa-4654-8c22-61b0060761c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076063 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076086 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 
15:36:09.076107 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qw2v\" (UniqueName: \"kubernetes.io/projected/56af4faf-3bc5-4902-a06e-8e794a313d1c-kube-api-access-2qw2v\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076129 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrns\" (UniqueName: \"kubernetes.io/projected/cddd7638-43a1-43c9-9e72-62790d9d4e87-kube-api-access-bqrns\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076153 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/881498d7-4eaa-4654-8c22-61b0060761c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076172 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9e722bd-c443-4cb6-8104-e630a4c0b58f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076195 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-plugins-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076223 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjj26\" (UniqueName: \"kubernetes.io/projected/adfa00ba-2415-46e4-b252-dbe5a74ab837-kube-api-access-kjj26\") pod \"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076241 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076262 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-registration-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 
crc kubenswrapper[4713]: I0126 15:36:09.076285 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b6sn\" (UniqueName: \"kubernetes.io/projected/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-kube-api-access-5b6sn\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076307 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/56af4faf-3bc5-4902-a06e-8e794a313d1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076326 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b6008-1681-42fa-b5bb-771a022070d9-config\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076343 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076395 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076414 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8slbc\" (UniqueName: \"kubernetes.io/projected/dea92962-ec74-4c08-a114-63075ee610aa-kube-api-access-8slbc\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076430 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmn7\" (UniqueName: \"kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076450 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9p6\" (UniqueName: \"kubernetes.io/projected/9074ebe7-3cae-403d-8152-d99fbcbfdf2b-kube-api-access-zm9p6\") pod \"migrator-59844c95c7-n4756\" (UID: \"9074ebe7-3cae-403d-8152-d99fbcbfdf2b\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076468 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881498d7-4eaa-4654-8c22-61b0060761c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076487 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076506 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076571 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076593 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cddd7638-43a1-43c9-9e72-62790d9d4e87-serving-cert\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076611 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bda2ea83-a2b5-4d40-8362-6db587054562-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076626 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqj8q\" (UniqueName: \"kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076641 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466gm\" (UniqueName: \"kubernetes.io/projected/ea7b898f-b55e-47f8-ab80-8a425c57699b-kube-api-access-466gm\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " 
pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076659 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-srv-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076676 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cddd7638-43a1-43c9-9e72-62790d9d4e87-config\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076693 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076714 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-stats-auth\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076734 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bda2ea83-a2b5-4d40-8362-6db587054562-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076755 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7b898f-b55e-47f8-ab80-8a425c57699b-cert\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076788 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076806 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076821 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-webhook-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076837 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-csi-data-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076856 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-service-ca-bundle\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076879 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-mountpoint-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076901 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99kb\" (UniqueName: \"kubernetes.io/projected/3200b97d-6535-4cfb-981c-aa18f461fff5-kube-api-access-g99kb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076920 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea92962-ec74-4c08-a114-63075ee610aa-config-volume\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076943 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73b6008-1681-42fa-b5bb-771a022070d9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076962 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076981 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xsn77\" (UniqueName: \"kubernetes.io/projected/5c915437-e230-4e10-96d6-aa86c170f1b6-kube-api-access-xsn77\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.076997 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dea92962-ec74-4c08-a114-63075ee610aa-metrics-tls\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077014 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jr9\" (UniqueName: \"kubernetes.io/projected/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-kube-api-access-m5jr9\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077034 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9491684-a2f5-4ec9-a42b-7db8021c410f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077052 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077075 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-profile-collector-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077091 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-certs\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077105 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-srv-cert\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077119 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8zd5\" (UniqueName: 
\"kubernetes.io/projected/ca27415a-5c07-49c1-be23-8ab77740e240-kube-api-access-h8zd5\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077137 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077153 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4lh\" (UniqueName: \"kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077169 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-socket-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077226 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwq25\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-kube-api-access-dwq25\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077243 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.077259 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.082478 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.082542 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.089009 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.089681 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.090266 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.590173536 +0000 UTC m=+144.727190771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.091161 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.091385 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb5773d-d638-4e73-a955-b936c27c9d7f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.092669 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881498d7-4eaa-4654-8c22-61b0060761c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.093652 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: 
\"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.095389 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb5773d-d638-4e73-a955-b936c27c9d7f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.098764 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.108115 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.112582 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bda2ea83-a2b5-4d40-8362-6db587054562-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.119387 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bda2ea83-a2b5-4d40-8362-6db587054562-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.121035 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.122416 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cb5773d-d638-4e73-a955-b936c27c9d7f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plff7\" (UID: \"6cb5773d-d638-4e73-a955-b936c27c9d7f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.126918 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881498d7-4eaa-4654-8c22-61b0060761c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.127230 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/adfa00ba-2415-46e4-b252-dbe5a74ab837-metrics-tls\") pod \"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.127355 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.129702 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.138181 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.160221 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.160523 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/881498d7-4eaa-4654-8c22-61b0060761c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qx8q5\" (UID: \"881498d7-4eaa-4654-8c22-61b0060761c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.166565 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjj26\" (UniqueName: \"kubernetes.io/projected/adfa00ba-2415-46e4-b252-dbe5a74ab837-kube-api-access-kjj26\") pod \"dns-operator-744455d44c-dnw7f\" (UID: \"adfa00ba-2415-46e4-b252-dbe5a74ab837\") " pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178144 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: 
I0126 15:36:09.178420 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-metrics-certs\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178457 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm56d\" (UniqueName: \"kubernetes.io/projected/c9e722bd-c443-4cb6-8104-e630a4c0b58f-kube-api-access-hm56d\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178482 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-cabundle\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178508 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-default-certificate\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178530 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbft\" (UniqueName: \"kubernetes.io/projected/a9491684-a2f5-4ec9-a42b-7db8021c410f-kube-api-access-8dbft\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178548 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3200b97d-6535-4cfb-981c-aa18f461fff5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178584 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b73b6008-1681-42fa-b5bb-771a022070d9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178608 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj62s\" (UniqueName: \"kubernetes.io/projected/f8886930-6560-40e0-bb1f-4b63bfd27a39-kube-api-access-kj62s\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178643 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-node-bootstrap-token\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178667 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-key\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178688 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddf9\" (UniqueName: \"kubernetes.io/projected/43111b18-562c-46e1-be8e-56ed79f40d3b-kube-api-access-xddf9\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178714 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/43111b18-562c-46e1-be8e-56ed79f40d3b-tmpfs\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178736 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178765 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtq2m\" (UniqueName: \"kubernetes.io/projected/e452036b-04a9-44f3-9401-e51bb17872cd-kube-api-access-mtq2m\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178787 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3200b97d-6535-4cfb-981c-aa18f461fff5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178810 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qw2v\" (UniqueName: \"kubernetes.io/projected/56af4faf-3bc5-4902-a06e-8e794a313d1c-kube-api-access-2qw2v\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178827 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrns\" (UniqueName: \"kubernetes.io/projected/cddd7638-43a1-43c9-9e72-62790d9d4e87-kube-api-access-bqrns\") pod 
\"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178845 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9e722bd-c443-4cb6-8104-e630a4c0b58f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178865 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-plugins-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178889 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-registration-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178908 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b6sn\" (UniqueName: \"kubernetes.io/projected/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-kube-api-access-5b6sn\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178923 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/56af4faf-3bc5-4902-a06e-8e794a313d1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178942 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b6008-1681-42fa-b5bb-771a022070d9-config\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178961 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178981 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8slbc\" (UniqueName: \"kubernetes.io/projected/dea92962-ec74-4c08-a114-63075ee610aa-kube-api-access-8slbc\") pod \"dns-default-l6pr5\" (UID: 
\"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.178998 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmn7\" (UniqueName: \"kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179014 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9p6\" (UniqueName: \"kubernetes.io/projected/9074ebe7-3cae-403d-8152-d99fbcbfdf2b-kube-api-access-zm9p6\") pod \"migrator-59844c95c7-n4756\" (UID: \"9074ebe7-3cae-403d-8152-d99fbcbfdf2b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179033 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179054 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cddd7638-43a1-43c9-9e72-62790d9d4e87-serving-cert\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179076 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466gm\" (UniqueName: \"kubernetes.io/projected/ea7b898f-b55e-47f8-ab80-8a425c57699b-kube-api-access-466gm\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179091 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-srv-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179105 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cddd7638-43a1-43c9-9e72-62790d9d4e87-config\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179121 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179139 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-stats-auth\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179157 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7b898f-b55e-47f8-ab80-8a425c57699b-cert\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179183 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-webhook-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179198 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-csi-data-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179212 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-service-ca-bundle\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179234 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-mountpoint-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179254 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99kb\" (UniqueName: \"kubernetes.io/projected/3200b97d-6535-4cfb-981c-aa18f461fff5-kube-api-access-g99kb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179270 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea92962-ec74-4c08-a114-63075ee610aa-config-volume\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179288 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73b6008-1681-42fa-b5bb-771a022070d9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: 
I0126 15:36:09.179306 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsn77\" (UniqueName: \"kubernetes.io/projected/5c915437-e230-4e10-96d6-aa86c170f1b6-kube-api-access-xsn77\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179336 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dea92962-ec74-4c08-a114-63075ee610aa-metrics-tls\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179352 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5jr9\" (UniqueName: \"kubernetes.io/projected/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-kube-api-access-m5jr9\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179396 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9491684-a2f5-4ec9-a42b-7db8021c410f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179412 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179428 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-profile-collector-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179444 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-certs\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179461 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-srv-cert\") pod \"olm-operator-6b444d44fb-kr92c\" 
(UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.179491 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8zd5\" (UniqueName: \"kubernetes.io/projected/ca27415a-5c07-49c1-be23-8ab77740e240-kube-api-access-h8zd5\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.181834 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.681726638 +0000 UTC m=+144.818743873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.186465 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4lh\" (UniqueName: \"kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.186534 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-socket-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.187077 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-socket-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.187203 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-metrics-certs\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.187860 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-plugins-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.188843 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cddd7638-43a1-43c9-9e72-62790d9d4e87-config\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.191875 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-srv-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.195173 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-cabundle\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.195969 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-node-bootstrap-token\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.197673 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/43111b18-562c-46e1-be8e-56ed79f40d3b-tmpfs\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.198685 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-registration-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.199179 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3200b97d-6535-4cfb-981c-aa18f461fff5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.199728 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea92962-ec74-4c08-a114-63075ee610aa-config-volume\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.200232 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-csi-data-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.200258 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.203122 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.204642 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.204895 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b6008-1681-42fa-b5bb-771a022070d9-config\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.207170 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-service-ca-bundle\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.207317 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.207709 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cddd7638-43a1-43c9-9e72-62790d9d4e87-serving-cert\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.212658 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.216889 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f8886930-6560-40e0-bb1f-4b63bfd27a39-signing-key\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.220774 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e452036b-04a9-44f3-9401-e51bb17872cd-certs\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.220889 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca27415a-5c07-49c1-be23-8ab77740e240-mountpoint-dir\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.221326 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9491684-a2f5-4ec9-a42b-7db8021c410f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.221737 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dea92962-ec74-4c08-a114-63075ee610aa-metrics-tls\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.222302 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-webhook-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.222852 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-stats-auth\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.223831 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.224847 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c915437-e230-4e10-96d6-aa86c170f1b6-srv-cert\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.228071 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-profile-collector-cert\") pod \"catalog-operator-68c6474976-k94zt\" (UID: 
\"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.228621 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7b898f-b55e-47f8-ab80-8a425c57699b-cert\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.228867 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3200b97d-6535-4cfb-981c-aa18f461fff5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.229087 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/56af4faf-3bc5-4902-a06e-8e794a313d1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.235838 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwq25\" (UniqueName: \"kubernetes.io/projected/bda2ea83-a2b5-4d40-8362-6db587054562-kube-api-access-dwq25\") pod \"cluster-image-registry-operator-dc59b4c8b-ntrqf\" (UID: \"bda2ea83-a2b5-4d40-8362-6db587054562\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.242336 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9e722bd-c443-4cb6-8104-e630a4c0b58f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.242441 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-default-certificate\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.242615 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43111b18-562c-46e1-be8e-56ed79f40d3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.242718 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.243524 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73b6008-1681-42fa-b5bb-771a022070d9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.258127 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqj8q\" (UniqueName: \"kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q\") pod \"oauth-openshift-558db77b4-lsc7z\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.262022 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8slbc\" (UniqueName: \"kubernetes.io/projected/dea92962-ec74-4c08-a114-63075ee610aa-kube-api-access-8slbc\") pod \"dns-default-l6pr5\" (UID: \"dea92962-ec74-4c08-a114-63075ee610aa\") " pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.269050 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.272156 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j"] Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.281764 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmn7\" (UniqueName: \"kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7\") pod \"marketplace-operator-79b997595-574q9\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.282061 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.290291 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.293563 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.294014 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm56d\" (UniqueName: \"kubernetes.io/projected/c9e722bd-c443-4cb6-8104-e630a4c0b58f-kube-api-access-hm56d\") pod \"control-plane-machine-set-operator-78cbb6b69f-2rd4s\" (UID: \"c9e722bd-c443-4cb6-8104-e630a4c0b58f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.294548 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.794531864 +0000 UTC m=+144.931549099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.316760 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.323094 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9p6\" (UniqueName: \"kubernetes.io/projected/9074ebe7-3cae-403d-8152-d99fbcbfdf2b-kube-api-access-zm9p6\") pod \"migrator-59844c95c7-n4756\" (UID: \"9074ebe7-3cae-403d-8152-d99fbcbfdf2b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.339290 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtq2m\" (UniqueName: \"kubernetes.io/projected/e452036b-04a9-44f3-9401-e51bb17872cd-kube-api-access-mtq2m\") pod \"machine-config-server-clnp7\" (UID: \"e452036b-04a9-44f3-9401-e51bb17872cd\") " pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.372870 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.373235 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj62s\" (UniqueName: \"kubernetes.io/projected/f8886930-6560-40e0-bb1f-4b63bfd27a39-kube-api-access-kj62s\") pod \"service-ca-9c57cc56f-9hkc7\" (UID: \"f8886930-6560-40e0-bb1f-4b63bfd27a39\") " pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.381228 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbft\" (UniqueName: \"kubernetes.io/projected/a9491684-a2f5-4ec9-a42b-7db8021c410f-kube-api-access-8dbft\") pod \"multus-admission-controller-857f4d67dd-6h7k9\" (UID: \"a9491684-a2f5-4ec9-a42b-7db8021c410f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.397000 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.398148 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:09.898126038 +0000 UTC m=+145.035143273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.425490 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.456497 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddf9\" (UniqueName: \"kubernetes.io/projected/43111b18-562c-46e1-be8e-56ed79f40d3b-kube-api-access-xddf9\") pod \"packageserver-d55dfcdfc-sfmgx\" (UID: \"43111b18-562c-46e1-be8e-56ed79f40d3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.457547 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.473251 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5jr9\" (UniqueName: \"kubernetes.io/projected/7222f4f9-aa40-4909-a75e-70b5c1ef00fd-kube-api-access-m5jr9\") pod \"router-default-5444994796-lxzxj\" (UID: \"7222f4f9-aa40-4909-a75e-70b5c1ef00fd\") " pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.475440 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b73b6008-1681-42fa-b5bb-771a022070d9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gcgb8\" (UID: \"b73b6008-1681-42fa-b5bb-771a022070d9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.478506 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4lh\" (UniqueName: \"kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh\") pod \"collect-profiles-29490690-wnmzb\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.480164 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8zd5\" (UniqueName: \"kubernetes.io/projected/ca27415a-5c07-49c1-be23-8ab77740e240-kube-api-access-h8zd5\") pod \"csi-hostpathplugin-6vntj\" (UID: \"ca27415a-5c07-49c1-be23-8ab77740e240\") " pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.484181 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.512784 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.513076 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.517298 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.517743 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.017725547 +0000 UTC m=+145.154742772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.519418 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99kb\" (UniqueName: \"kubernetes.io/projected/3200b97d-6535-4cfb-981c-aa18f461fff5-kube-api-access-g99kb\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jrzf\" (UID: \"3200b97d-6535-4cfb-981c-aa18f461fff5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.530296 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.538586 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b6sn\" (UniqueName: \"kubernetes.io/projected/f8681ae0-298b-45e5-bef9-4dcb591bd1ec-kube-api-access-5b6sn\") pod \"catalog-operator-68c6474976-k94zt\" (UID: \"f8681ae0-298b-45e5-bef9-4dcb591bd1ec\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.541655 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qw2v\" (UniqueName: \"kubernetes.io/projected/56af4faf-3bc5-4902-a06e-8e794a313d1c-kube-api-access-2qw2v\") pod \"package-server-manager-789f6589d5-qpq9h\" (UID: \"56af4faf-3bc5-4902-a06e-8e794a313d1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.548303 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.553256 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466gm\" (UniqueName: \"kubernetes.io/projected/ea7b898f-b55e-47f8-ab80-8a425c57699b-kube-api-access-466gm\") pod \"ingress-canary-9c65z\" (UID: \"ea7b898f-b55e-47f8-ab80-8a425c57699b\") " pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.601316 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9c65z" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.601847 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-clnp7" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.602230 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.619161 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.619505 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.119466819 +0000 UTC m=+145.256484064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.619830 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.620182 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.120166039 +0000 UTC m=+145.257183274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.636114 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrns\" (UniqueName: \"kubernetes.io/projected/cddd7638-43a1-43c9-9e72-62790d9d4e87-kube-api-access-bqrns\") pod \"service-ca-operator-777779d784-z58gc\" (UID: \"cddd7638-43a1-43c9-9e72-62790d9d4e87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.652046 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsn77\" (UniqueName: \"kubernetes.io/projected/5c915437-e230-4e10-96d6-aa86c170f1b6-kube-api-access-xsn77\") pod \"olm-operator-6b444d44fb-kr92c\" (UID: \"5c915437-e230-4e10-96d6-aa86c170f1b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.681438 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" event={"ID":"619d9117-d7de-4088-a239-bcf1b3560380","Type":"ContainerStarted","Data":"4998308670c2e6ba0858ae0f2f902820f7a5b39a2cc7099326a7f34db67658cd"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.681512 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" event={"ID":"619d9117-d7de-4088-a239-bcf1b3560380","Type":"ContainerStarted","Data":"c526aa31877025173257f53e2cd05e8e4be2930dfd9e63a4fed6a0a87bb3feea"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.684031 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p5wsk" event={"ID":"adaaafc1-19f7-4240-bf6b-9c5c8adfa632","Type":"ContainerStarted","Data":"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.684063 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p5wsk" event={"ID":"adaaafc1-19f7-4240-bf6b-9c5c8adfa632","Type":"ContainerStarted","Data":"c431014469e6018a13f7d8415185d5f87f7bbb938ce6c8af3d5f2b615457b3c3"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.688982 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" event={"ID":"9a8716a7-082f-463c-9a07-da822550f992","Type":"ContainerStarted","Data":"a4c264b74c3a0cbfc067b562832815295785d99bd9be0c823868e991c26a260f"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.689104 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" event={"ID":"9a8716a7-082f-463c-9a07-da822550f992","Type":"ContainerStarted","Data":"502f8d7312d5af42eaa97fbb3eeebc6daabd64b00930dfadc1838885014be809"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.691574 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" 
event={"ID":"8814b03e-4835-4e4b-863b-acb4a7473f54","Type":"ContainerStarted","Data":"24c3c63340ce916a959d1ba4795d4f9a8a40047125166f147e0f1452f5a5e16d"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.691674 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" event={"ID":"8814b03e-4835-4e4b-863b-acb4a7473f54","Type":"ContainerStarted","Data":"307dd7de46587da497fd015dfa7137b6974604ece1500e3ce49bcd8cbb1079c6"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.693634 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" event={"ID":"d10aa23f-eb67-42dc-84b9-9489eeac389e","Type":"ContainerStarted","Data":"7201ba2e7dd9fd32a29bf4cb7fcabe0d07c8fbce180b15967366a8c9c58d362b"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.695223 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-55x6b" event={"ID":"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e","Type":"ContainerStarted","Data":"009f0a7e7ac135036a23b146e141ff9a36ed250f0613565102e006b065fa5a2a"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.696564 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.697333 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" event={"ID":"ab3b4047-952e-4f97-afb6-b7418db3519d","Type":"ContainerStarted","Data":"b358afac3ef642e666fd2f75039eeb69f7ba09677cfb9adb4a99001694b87ef6"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.698304 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" event={"ID":"6b3f2a4f-1918-41bd-b81e-662f947d63d3","Type":"ContainerStarted","Data":"7eb9089576bbe7dbd3bb065522f81a5c575d7db8e652073e40619c3979897665"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.709938 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" event={"ID":"738df9e4-f531-420c-a4d6-2f3091d86068","Type":"ContainerStarted","Data":"fd8d7b04f50028108a09281c695b8485e27bfbe10201f62b5d343b0af70b451f"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.710016 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" event={"ID":"738df9e4-f531-420c-a4d6-2f3091d86068","Type":"ContainerStarted","Data":"7862502b1d737050040e92b33fae38d3b7e82d65c4651936a8435c5f7ee4b386"} Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.715170 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.715530 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.721091 4713 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.721180 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.221156389 +0000 UTC m=+145.358173624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.725100 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.725959 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.225940525 +0000 UTC m=+145.362957760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.735803 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.752848 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.767796 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.798582 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.815270 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.834948 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.835485 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.835823 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.335790397 +0000 UTC m=+145.472807632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.836536 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.841310 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.341289403 +0000 UTC m=+145.478306828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.844382 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.954566 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz"] Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.954717 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.954814 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.454788159 +0000 UTC m=+145.591805394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:09 crc kubenswrapper[4713]: I0126 15:36:09.955313 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:09 crc kubenswrapper[4713]: E0126 15:36:09.955770 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.455757596 +0000 UTC m=+145.592774821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.056462 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.056793 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:10.556760417 +0000 UTC m=+145.693777712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.056912 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.057605 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.55759443 +0000 UTC m=+145.694611665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.158506 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.158895 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.658876179 +0000 UTC m=+145.795893414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.194068 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f465s"] Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.241700 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"] Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.281731 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.283150 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.78313066 +0000 UTC m=+145.920147895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.295489 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"] Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.302404 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"] Jan 26 15:36:10 crc kubenswrapper[4713]: W0126 15:36:10.336718 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7222f4f9_aa40_4909_a75e_70b5c1ef00fd.slice/crio-8c4537b6038fec4f1771d876280d4655c48849c48729b9ef7635b49efd18cb5d WatchSource:0}: Error finding container 8c4537b6038fec4f1771d876280d4655c48849c48729b9ef7635b49efd18cb5d: Status 404 returned error can't find the container with id 8c4537b6038fec4f1771d876280d4655c48849c48729b9ef7635b49efd18cb5d Jan 26 15:36:10 crc kubenswrapper[4713]: W0126 15:36:10.380880 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b2187e_ae3f_460d_8bd1_0d950c1e0535.slice/crio-346279a6584c6d8c34f6516fd3620703fcbba11cbb5f10ef2742c2c4207107b4 WatchSource:0}: Error finding container 346279a6584c6d8c34f6516fd3620703fcbba11cbb5f10ef2742c2c4207107b4: Status 404 returned error can't find the container with id 
346279a6584c6d8c34f6516fd3620703fcbba11cbb5f10ef2742c2c4207107b4 Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.383224 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7fwh2" podStartSLOduration=124.383200104 podStartE2EDuration="2m4.383200104s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.380660672 +0000 UTC m=+145.517677907" watchObservedRunningTime="2026-01-26 15:36:10.383200104 +0000 UTC m=+145.520217339" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.384140 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.385453 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.885409337 +0000 UTC m=+146.022426572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: W0126 15:36:10.396507 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd74e89_2dfb_4744_bf0c_7aedd0e799e0.slice/crio-36fa86bf3fd990a7d06483db343d3cab268c90e2631340463ec3562d7450ed59 WatchSource:0}: Error finding container 36fa86bf3fd990a7d06483db343d3cab268c90e2631340463ec3562d7450ed59: Status 404 returned error can't find the container with id 36fa86bf3fd990a7d06483db343d3cab268c90e2631340463ec3562d7450ed59 Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.425433 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mksjz" podStartSLOduration=124.425413884 podStartE2EDuration="2m4.425413884s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.423719346 +0000 UTC m=+145.560736571" watchObservedRunningTime="2026-01-26 15:36:10.425413884 +0000 UTC m=+145.562431119" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.451833 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rfpbx"] Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.472168 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbskr" podStartSLOduration=124.472139162 podStartE2EDuration="2m4.472139162s" podCreationTimestamp="2026-01-26 15:34:06 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.470544236 +0000 UTC m=+145.607561471" watchObservedRunningTime="2026-01-26 15:36:10.472139162 +0000 UTC m=+145.609156397" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.485647 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5h8"] Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.486000 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.486401 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:10.986384947 +0000 UTC m=+146.123402192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.569280 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kwb58" podStartSLOduration=123.569250642 podStartE2EDuration="2m3.569250642s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.508000081 +0000 UTC m=+145.645017326" watchObservedRunningTime="2026-01-26 15:36:10.569250642 +0000 UTC m=+145.706267877" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.569566 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kszgv" podStartSLOduration=124.569560421 podStartE2EDuration="2m4.569560421s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.56425993 +0000 UTC m=+145.701277165" watchObservedRunningTime="2026-01-26 15:36:10.569560421 +0000 UTC m=+145.706577656" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.587678 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.588107 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.088087067 +0000 UTC m=+146.225104302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.607986 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-p5wsk" podStartSLOduration=124.607965302 podStartE2EDuration="2m4.607965302s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.605379488 +0000 UTC m=+145.742396723" watchObservedRunningTime="2026-01-26 15:36:10.607965302 +0000 UTC m=+145.744982537" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.621253 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.689691 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.690089 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.190076715 +0000 UTC m=+146.327093950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.694396 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-55x6b" podStartSLOduration=124.694353287 podStartE2EDuration="2m4.694353287s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:10.693604656 +0000 UTC m=+145.830621881" watchObservedRunningTime="2026-01-26 15:36:10.694353287 +0000 UTC m=+145.831370522" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.716040 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" event={"ID":"6eb2408a-c785-4784-9f65-a2fe7d218903","Type":"ContainerStarted","Data":"a47f474a5060abad6423898237c2251e7dfa0e25c700c5e371798468aac68291"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.716850 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" event={"ID":"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0","Type":"ContainerStarted","Data":"36fa86bf3fd990a7d06483db343d3cab268c90e2631340463ec3562d7450ed59"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.717851 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" event={"ID":"971b502e-8b71-404b-a7ca-58aa1894c648","Type":"ContainerStarted","Data":"24bd32ce13ccc550ba78318d3b5968a4497fa748a510b36c32f83bc562d0b456"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.718649 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-clnp7" event={"ID":"e452036b-04a9-44f3-9401-e51bb17872cd","Type":"ContainerStarted","Data":"4c8375838f721057b1b698f06f7b176db137b299a23ca2abd2297ec7725d120a"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.719779 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" event={"ID":"ab3b4047-952e-4f97-afb6-b7418db3519d","Type":"ContainerStarted","Data":"8b78b225e37a823b568c86c549576c413a729f4cadc1cd2d15ca5c16f4b11809"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.720718 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" event={"ID":"d10aa23f-eb67-42dc-84b9-9489eeac389e","Type":"ContainerStarted","Data":"602584d7b0407738e7a01115d32cc17b0b00b9b819ba72bff976217eaf93c79f"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.721469 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" event={"ID":"99621db9-a20f-42b1-a788-a65ad55b6a52","Type":"ContainerStarted","Data":"18c0ff29c7068757529db809aa88dc778266dd5402a1dad0a9675e4d9c8060d7"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.722178 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-lxzxj" event={"ID":"7222f4f9-aa40-4909-a75e-70b5c1ef00fd","Type":"ContainerStarted","Data":"8c4537b6038fec4f1771d876280d4655c48849c48729b9ef7635b49efd18cb5d"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.722851 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" event={"ID":"05b2187e-ae3f-460d-8bd1-0d950c1e0535","Type":"ContainerStarted","Data":"346279a6584c6d8c34f6516fd3620703fcbba11cbb5f10ef2742c2c4207107b4"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.724017 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" event={"ID":"b058206c-e1d2-41d2-ae2f-c428ad49eea4","Type":"ContainerStarted","Data":"a9438c5aa52f780e621c2c939109528d7b4260c7c763a56b6e3d66f9b7c27c20"} Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.725153 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.725225 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.792534 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.793091 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.293071423 +0000 UTC m=+146.430088648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.793250 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.793542 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:11.293534326 +0000 UTC m=+146.430551561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.894988 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.896451 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.39642931 +0000 UTC m=+146.533446545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:10 crc kubenswrapper[4713]: I0126 15:36:10.901713 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:10 crc kubenswrapper[4713]: E0126 15:36:10.902005 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.401994208 +0000 UTC m=+146.539011443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.007036 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.007686 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.507667021 +0000 UTC m=+146.644684256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.110036 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.110485 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.610469993 +0000 UTC m=+146.747487228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.211639 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.212420 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.71240031 +0000 UTC m=+146.849417545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.318413 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.321261 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.821232413 +0000 UTC m=+146.958249648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.422217 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.422613 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.922589404 +0000 UTC m=+147.059606639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.422665 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.423471 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:11.923444918 +0000 UTC m=+147.060462153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.496807 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hxmkn"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.528838 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.529461 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.02943361 +0000 UTC m=+147.166450845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.633789 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l6pr5"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.641678 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.642146 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.142131403 +0000 UTC m=+147.279148638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.653644 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9c65z"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.689882 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.689939 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.702179 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756"] Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.742408 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.742691 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.24265894 +0000 UTC m=+147.379676175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.742956 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.743432 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.243422352 +0000 UTC m=+147.380439587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.744579 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" event={"ID":"05b2187e-ae3f-460d-8bd1-0d950c1e0535","Type":"ContainerStarted","Data":"63554c78f6b5430d70467c850be74eac8f118a4a985090b0efb1c230e2431250"} Jan 26 15:36:11 crc kubenswrapper[4713]: W0126 15:36:11.744884 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbda2ea83_a2b5_4d40_8362_6db587054562.slice/crio-5290a4dbd0caec5dca26d1f6b97aa6c0b67ee15fdcee7f82a1a124fad6a93af3 WatchSource:0}: Error finding container 5290a4dbd0caec5dca26d1f6b97aa6c0b67ee15fdcee7f82a1a124fad6a93af3: Status 404 returned error can't find the container with id 5290a4dbd0caec5dca26d1f6b97aa6c0b67ee15fdcee7f82a1a124fad6a93af3 Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.750790 4713 generic.go:334] "Generic (PLEG): container finished" podID="3bd74e89-2dfb-4744-bf0c-7aedd0e799e0" containerID="93a635e2a063481296eab2147062d4e1858322f1b46bf19acea8fb3062a175bd" exitCode=0 Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.751088 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" event={"ID":"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0","Type":"ContainerDied","Data":"93a635e2a063481296eab2147062d4e1858322f1b46bf19acea8fb3062a175bd"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.761690 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" event={"ID":"9c219134-328d-4145-8dd2-3f01df03a055","Type":"ContainerStarted","Data":"e8af7b93789c610dabb1ee3191f02759f3ddfd69b2ef384560d04103e7a97ca0"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.761777 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" event={"ID":"9c219134-328d-4145-8dd2-3f01df03a055","Type":"ContainerStarted","Data":"e119e933f70503022f27b736ba62cc748be2883e1f4a47a4d92384e03b362d4f"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.769300 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" event={"ID":"d10aa23f-eb67-42dc-84b9-9489eeac389e","Type":"ContainerStarted","Data":"0cbb401a515706c22ce88d097ff8a8662251fd4d5ef4c566e96c01c321986153"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.777247 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" event={"ID":"ab00e40d-3300-4351-89df-203b1bf11d72","Type":"ContainerStarted","Data":"51e7b98875aeebe8582a878b3eeec472743156681c2c1547ceb694a8c4c8ec14"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.845332 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.848987 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" event={"ID":"ab3b4047-952e-4f97-afb6-b7418db3519d","Type":"ContainerStarted","Data":"ae91abf9cf94369e304c4c122061a1f01d7b04a79c4f2b8578ec941a225c5714"} Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.851703 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.351673598 +0000 UTC m=+147.488690843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.852867 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.856677 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.35666335 +0000 UTC m=+147.493680585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.860308 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" event={"ID":"971b502e-8b71-404b-a7ca-58aa1894c648","Type":"ContainerStarted","Data":"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.861577 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.873917 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qb8bs" podStartSLOduration=125.8738955 podStartE2EDuration="2m5.8738955s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:11.871861392 +0000 UTC m=+147.008878627" watchObservedRunningTime="2026-01-26 15:36:11.8738955 +0000 UTC m=+147.010912725" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.876013 4713 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8r7k5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.876085 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" podUID="971b502e-8b71-404b-a7ca-58aa1894c648" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.880307 4713 generic.go:334] "Generic (PLEG): container finished" podID="6eb2408a-c785-4784-9f65-a2fe7d218903" containerID="db13e09861366eb26bc300ffb9e616154ddaf610005b29d11a51eb29e805bdbb" exitCode=0 Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.880386 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" event={"ID":"6eb2408a-c785-4784-9f65-a2fe7d218903","Type":"ContainerDied","Data":"db13e09861366eb26bc300ffb9e616154ddaf610005b29d11a51eb29e805bdbb"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.883851 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" event={"ID":"99621db9-a20f-42b1-a788-a65ad55b6a52","Type":"ContainerStarted","Data":"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.884928 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:11 crc 
kubenswrapper[4713]: I0126 15:36:11.900790 4713 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rcgql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.900855 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.922434 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-clnp7" event={"ID":"e452036b-04a9-44f3-9401-e51bb17872cd","Type":"ContainerStarted","Data":"7b8dc54cd761157d104910243e8afbbe18731b2c2693dd56eb3bcfce7a701800"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.936027 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lxzxj" event={"ID":"7222f4f9-aa40-4909-a75e-70b5c1ef00fd","Type":"ContainerStarted","Data":"53a7cf12ce64b5b787daa48ca09d12ea504d9fec5a6a1eb9f2d2c0da5866d86a"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.950282 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wsl5j" podStartSLOduration=125.95026412 podStartE2EDuration="2m5.95026412s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:11.949113197 +0000 UTC m=+147.086130432" watchObservedRunningTime="2026-01-26 15:36:11.95026412 +0000 UTC m=+147.087281355" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.951416 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" podStartSLOduration=124.951410232 podStartE2EDuration="2m4.951410232s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:11.906770164 +0000 UTC m=+147.043787399" watchObservedRunningTime="2026-01-26 15:36:11.951410232 +0000 UTC m=+147.088427457" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.953669 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.954308 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.454293895 +0000 UTC m=+147.591311130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.955175 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.959780 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" event={"ID":"b058206c-e1d2-41d2-ae2f-c428ad49eea4","Type":"ContainerStarted","Data":"b42b5c825d6a6c8dfb43091a6adef3be82cc1c2a7858d62e54cb8b7bc20f32b6"} Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.959827 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.969015 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:11 crc kubenswrapper[4713]: E0126 15:36:11.962354 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.462336113 +0000 UTC m=+147.599353348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:11 crc kubenswrapper[4713]: I0126 15:36:11.990275 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dnw7f"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:11.996206 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.035520 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vntj"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.057891 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h7k9"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.050867 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" podStartSLOduration=126.050844048 podStartE2EDuration="2m6.050844048s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:11.971195825 +0000 UTC m=+147.108213060" watchObservedRunningTime="2026-01-26 15:36:12.050844048 +0000 UTC m=+147.187861283" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.072613 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.072967 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.572925186 +0000 UTC m=+147.709942421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.077759 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-clnp7" podStartSLOduration=7.077735883 podStartE2EDuration="7.077735883s" podCreationTimestamp="2026-01-26 15:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:11.997316657 +0000 UTC m=+147.134333892" watchObservedRunningTime="2026-01-26 15:36:12.077735883 +0000 UTC m=+147.214753118" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.079593 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z58gc"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.090430 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9hkc7"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.090497 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.093235 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.593211313 +0000 UTC m=+147.730228538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.110962 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.132730 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.150563 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.152381 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lxzxj" podStartSLOduration=126.152344742 podStartE2EDuration="2m6.152344742s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:12.046431813 +0000 UTC m=+147.183449048" watchObservedRunningTime="2026-01-26 15:36:12.152344742 +0000 UTC m=+147.289361977" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.155575 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.164165 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.188861 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.188903 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.191860 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.192060 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.69202763 +0000 UTC m=+147.829044865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.192118 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.192520 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"] Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.197202 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.697179016 +0000 UTC m=+147.834196251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.233695 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.263766 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c"] Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.295245 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.295852 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.795811409 +0000 UTC m=+147.932828654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: W0126 15:36:12.310515 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c915437_e230_4e10_96d6_aa86c170f1b6.slice/crio-aa71d2b1c70c0908939f54207b397cd8ac7a60dccc50cb86cb18a1dfce18b4d1 WatchSource:0}: Error finding container aa71d2b1c70c0908939f54207b397cd8ac7a60dccc50cb86cb18a1dfce18b4d1: Status 404 returned error can't find the container with id aa71d2b1c70c0908939f54207b397cd8ac7a60dccc50cb86cb18a1dfce18b4d1 Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.323668 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt"] Jan 26 15:36:12 crc kubenswrapper[4713]: W0126 15:36:12.345220 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8681ae0_298b_45e5_bef9_4dcb591bd1ec.slice/crio-10c099c236e4d8b41c276d3e1ad24e9761750e327bc97089af030cadbed94d2c WatchSource:0}: Error finding container 10c099c236e4d8b41c276d3e1ad24e9761750e327bc97089af030cadbed94d2c: Status 404 returned error can't find the container with id 10c099c236e4d8b41c276d3e1ad24e9761750e327bc97089af030cadbed94d2c Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.396858 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.397501 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.897401677 +0000 UTC m=+148.034418912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.498469 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.498822 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:12.998802339 +0000 UTC m=+148.135819574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.600064 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.600691 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.100676454 +0000 UTC m=+148.237693679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.701842 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.701990 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.201965893 +0000 UTC m=+148.338983128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.702113 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.702921 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.20290881 +0000 UTC m=+148.339926045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.753409 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.757575 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.757647 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.803102 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.804166 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.304146377 +0000 UTC m=+148.441163612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:12 crc kubenswrapper[4713]: I0126 15:36:12.909754 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:12 crc kubenswrapper[4713]: E0126 15:36:12.910313 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.410300204 +0000 UTC m=+148.547317439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.018031 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.018631 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.518599782 +0000 UTC m=+148.655617017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.018797 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.019252 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.51924465 +0000 UTC m=+148.656261885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.056632 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" event={"ID":"68ee95fd-840e-47fe-8c69-aeef8cef6e80","Type":"ContainerStarted","Data":"c8f0b8c963038da3dd127d24a58e85a90b1a1c4aff612fe684bc533118c97027"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.056675 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" event={"ID":"68ee95fd-840e-47fe-8c69-aeef8cef6e80","Type":"ContainerStarted","Data":"a2e81f5fedeacea31a30c00a49776f8c3a53010e2935ce700394cb22095eb0fc"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.059342 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" event={"ID":"3bd74e89-2dfb-4744-bf0c-7aedd0e799e0","Type":"ContainerStarted","Data":"a5c1c08d085a66f600688ed1ed34c94bb1637b5ba69b674d005ef189ea6e90d8"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.079450 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2tsvd" podStartSLOduration=127.07942915 podStartE2EDuration="2m7.07942915s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.078813003 +0000 UTC m=+148.215830238" watchObservedRunningTime="2026-01-26 15:36:13.07942915 +0000 UTC m=+148.216446385" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.098082 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" event={"ID":"56af4faf-3bc5-4902-a06e-8e794a313d1c","Type":"ContainerStarted","Data":"de877febbe5a5ce38cbb170b72b28ad7ef4c2d59fa3ae50292f8016cfc898f37"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.100914 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9c65z" event={"ID":"ea7b898f-b55e-47f8-ab80-8a425c57699b","Type":"ContainerStarted","Data":"7631655cb15b3e147251126c70510c186acdb04332e5bc083d07297bd31a03e0"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.100943 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9c65z" event={"ID":"ea7b898f-b55e-47f8-ab80-8a425c57699b","Type":"ContainerStarted","Data":"be337a91f9c6bc8b23f1d7971d6ca732260703bb9e71cac268487a6f26d29a0b"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.108300 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" event={"ID":"6cb5773d-d638-4e73-a955-b936c27c9d7f","Type":"ContainerStarted","Data":"20b723b315c01447bd07007864581b9ec304688346eebffcc5f5f3e3c85d4317"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 
15:36:13.110126 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" event={"ID":"ca27415a-5c07-49c1-be23-8ab77740e240","Type":"ContainerStarted","Data":"0b353c491cefe9670555780c3861d93981d708ca04e1bdafe4de2e6ca6b25559"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.111656 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" event={"ID":"ca13e433-706e-4733-97e9-5ef2af9d4d19","Type":"ContainerStarted","Data":"b029de6ca58b4709cddbea8a57cabd65ba00ec5bbbfe281b134befc4a8afe312"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.120117 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.120456 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.620435846 +0000 UTC m=+148.757453081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.120605 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.120904 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.620895849 +0000 UTC m=+148.757913084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.131612 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm" podStartSLOduration=126.131587882 podStartE2EDuration="2m6.131587882s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.13116393 +0000 UTC m=+148.268181165" watchObservedRunningTime="2026-01-26 15:36:13.131587882 +0000 UTC m=+148.268605117" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.139286 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" event={"ID":"adfa00ba-2415-46e4-b252-dbe5a74ab837","Type":"ContainerStarted","Data":"4c83cf4953827ba0a24db2f9a5bc3e17fd87b469c95c06def6875434a79ceb22"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.143383 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" event={"ID":"f8886930-6560-40e0-bb1f-4b63bfd27a39","Type":"ContainerStarted","Data":"bbc1c645bbcce247cd8d7899204f53b4f8f78ca85f8d2c9987185f781c6a9bd2"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.155054 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l6pr5" event={"ID":"dea92962-ec74-4c08-a114-63075ee610aa","Type":"ContainerStarted","Data":"4b685f5235920e2abd6703eb0f4b62904ab0535c96484d780c9826f190b1b7b1"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.155110 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l6pr5" event={"ID":"dea92962-ec74-4c08-a114-63075ee610aa","Type":"ContainerStarted","Data":"e4d0d4055c955c2e37506b37da63f1156855a0796c819e130c51902175e05ea0"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.174505 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9c65z" podStartSLOduration=8.174481501 podStartE2EDuration="8.174481501s" podCreationTimestamp="2026-01-26 15:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.16737918 +0000 UTC m=+148.304396415" watchObservedRunningTime="2026-01-26 15:36:13.174481501 +0000 UTC m=+148.311498726" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.215901 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" event={"ID":"a9491684-a2f5-4ec9-a42b-7db8021c410f","Type":"ContainerStarted","Data":"be3c3368b2f211df590810d67e1e31956f28c944371a360fd5b89092726e5bbf"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.223390 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.223944 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.723920206 +0000 UTC m=+148.860937441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.297612 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" event={"ID":"9074ebe7-3cae-403d-8152-d99fbcbfdf2b","Type":"ContainerStarted","Data":"4ce7c8da5a672fd20c4f929001a45b35cb9f307c7fdc678a4e4af7669a69670b"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.297704 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" event={"ID":"9074ebe7-3cae-403d-8152-d99fbcbfdf2b","Type":"ContainerStarted","Data":"52679fcacc5ada8e83222690ee5f1593965a057caa858887fd5b2cd9e11a46ad"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.307471 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" event={"ID":"3c2e9103-9425-4cbd-8bb6-acf4aa336228","Type":"ContainerStarted","Data":"961bd20a7dd186c2344da263db19ce430de5645816a26beb3278878767445df7"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.310550 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" event={"ID":"b058206c-e1d2-41d2-ae2f-c428ad49eea4","Type":"ContainerStarted","Data":"db6b0caa1113c59d77bc64af13f4c4ad8d5ab477275ac09d4dba0ae43a91954c"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.316534 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" event={"ID":"ab00e40d-3300-4351-89df-203b1bf11d72","Type":"ContainerStarted","Data":"1b989d5154bc2879e982dfedc901be067d5c1a615d937fab24f166912ecb8690"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.316759 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.331564 4713 patch_prober.go:28] interesting pod/console-operator-58897d9998-hxmkn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.336772 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" podUID="ab00e40d-3300-4351-89df-203b1bf11d72" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 
10.217.0.7:8443: connect: connection refused" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.334207 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bszqz" podStartSLOduration=126.33418318 podStartE2EDuration="2m6.33418318s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.330804784 +0000 UTC m=+148.467822019" watchObservedRunningTime="2026-01-26 15:36:13.33418318 +0000 UTC m=+148.471200415" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.332162 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.393860 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.893834595 +0000 UTC m=+149.030851830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.456769 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:13.956709382 +0000 UTC m=+149.093726617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.451967 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.460394 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.508901 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.008882214 +0000 UTC m=+149.145899449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.511523 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" event={"ID":"3200b97d-6535-4cfb-981c-aa18f461fff5","Type":"ContainerStarted","Data":"b29fbb65cb53802e82fcf6cf08bb64a74fc3e282e132cb09ee35711e0a04ce59"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.543652 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" podStartSLOduration=127.543622422 podStartE2EDuration="2m7.543622422s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.428616853 +0000 UTC m=+148.565634088" watchObservedRunningTime="2026-01-26 15:36:13.543622422 +0000 UTC m=+148.680639657" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.562524 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.562963 4713 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.062946611 +0000 UTC m=+149.199963846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.591173 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" event={"ID":"cddd7638-43a1-43c9-9e72-62790d9d4e87","Type":"ContainerStarted","Data":"4b937468e45ede5cbed9e54e3a647a8c681fe790428374b1a304feb0980e71bc"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.591218 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" event={"ID":"cddd7638-43a1-43c9-9e72-62790d9d4e87","Type":"ContainerStarted","Data":"b94bdd91ab16d246a1e5956b2d628984026d3d6e1dfb06984735a31c6168eb29"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.609502 4713 generic.go:334] "Generic (PLEG): container finished" podID="05b2187e-ae3f-460d-8bd1-0d950c1e0535" containerID="63554c78f6b5430d70467c850be74eac8f118a4a985090b0efb1c230e2431250" exitCode=0 Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.610969 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" event={"ID":"05b2187e-ae3f-460d-8bd1-0d950c1e0535","Type":"ContainerDied","Data":"63554c78f6b5430d70467c850be74eac8f118a4a985090b0efb1c230e2431250"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.627012 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z58gc" podStartSLOduration=126.626992381 podStartE2EDuration="2m6.626992381s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.62698291 +0000 UTC m=+148.764000135" watchObservedRunningTime="2026-01-26 15:36:13.626992381 +0000 UTC m=+148.764009616" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.630178 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" event={"ID":"b73b6008-1681-42fa-b5bb-771a022070d9","Type":"ContainerStarted","Data":"c0fd216483d9b5941f75d341e04f1acef8bed603eb168a3061c60f0874062271"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.677284 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.678457 4713 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.17835593 +0000 UTC m=+149.315373165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.679082 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" event={"ID":"c9e722bd-c443-4cb6-8104-e630a4c0b58f","Type":"ContainerStarted","Data":"8f5022bcba49500787e53c36b96e2509cc9e706dff5feb63e4e95669fb1211b1"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.700536 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" event={"ID":"6eb2408a-c785-4784-9f65-a2fe7d218903","Type":"ContainerStarted","Data":"350b7e003c2c377ee27525ed8cdc0effb850dbea54bd45bbbff08b25eb152254"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.749125 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" event={"ID":"9c219134-328d-4145-8dd2-3f01df03a055","Type":"ContainerStarted","Data":"c8e622eb85555e4dd7389f2a87f98f181e0dd0aa5f5e12081c04124fadd99d33"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.752522 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" event={"ID":"0afe1ab0-3817-4d66-aaf9-e99181ae0a55","Type":"ContainerStarted","Data":"bf0859e31b85d07dfa3d282e089d3bbc370190e8495accdb9d64f9ac91cdd772"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.764277 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" event={"ID":"5c915437-e230-4e10-96d6-aa86c170f1b6","Type":"ContainerStarted","Data":"aa71d2b1c70c0908939f54207b397cd8ac7a60dccc50cb86cb18a1dfce18b4d1"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.766765 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.772763 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:13 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:13 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:13 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.772830 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.781150 4713 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.781451 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.281342467 +0000 UTC m=+149.418359702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.781659 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.781704 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.781741 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.784675 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.284660192 +0000 UTC m=+149.421677417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.786436 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.797870 4713 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kr92c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.797928 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" podUID="5c915437-e230-4e10-96d6-aa86c170f1b6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.800975 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.805142 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" podStartSLOduration=126.805120563 podStartE2EDuration="2m6.805120563s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.735568647 +0000 UTC m=+148.872585882" watchObservedRunningTime="2026-01-26 15:36:13.805120563 +0000 UTC m=+148.942137798" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.835059 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.889141 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" event={"ID":"f8681ae0-298b-45e5-bef9-4dcb591bd1ec","Type":"ContainerStarted","Data":"10c099c236e4d8b41c276d3e1ad24e9761750e327bc97089af030cadbed94d2c"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.889752 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.890746 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.390727806 +0000 UTC m=+149.527745041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.899376 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5h8" podStartSLOduration=126.899340841 podStartE2EDuration="2m6.899340841s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.806915464 +0000 UTC m=+148.943932699" watchObservedRunningTime="2026-01-26 15:36:13.899340841 +0000 UTC m=+149.036358076" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.899506 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" podStartSLOduration=126.899501915 podStartE2EDuration="2m6.899501915s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.896971993 +0000 UTC m=+149.033989228" watchObservedRunningTime="2026-01-26 15:36:13.899501915 +0000 UTC m=+149.036519150" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.919101 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" event={"ID":"bda2ea83-a2b5-4d40-8362-6db587054562","Type":"ContainerStarted","Data":"ee8ab74541729f1b85e3eddf662e9c6c47fb185a343f51ff8c891d6821b1cc70"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.919145 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" 
event={"ID":"bda2ea83-a2b5-4d40-8362-6db587054562","Type":"ContainerStarted","Data":"5290a4dbd0caec5dca26d1f6b97aa6c0b67ee15fdcee7f82a1a124fad6a93af3"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.919647 4713 csr.go:261] certificate signing request csr-8tqn2 is approved, waiting to be issued Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.937546 4713 csr.go:257] certificate signing request csr-8tqn2 is issued Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.944609 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" event={"ID":"43111b18-562c-46e1-be8e-56ed79f40d3b","Type":"ContainerStarted","Data":"5782dcbb53665fdf2ec7c2a14e142386952e2129ea740c745bd59edc95e33cca"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.946224 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.952477 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" event={"ID":"881498d7-4eaa-4654-8c22-61b0060761c0","Type":"ContainerStarted","Data":"e5d2332c56c268df7d1b27a40894c0ba94836c516fe1bff3a3df0162e1571dca"} Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.954564 4713 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rcgql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.954628 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.964690 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntrqf" podStartSLOduration=127.964656697 podStartE2EDuration="2m7.964656697s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:13.964636507 +0000 UTC m=+149.101653742" watchObservedRunningTime="2026-01-26 15:36:13.964656697 +0000 UTC m=+149.101673932" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.991483 4713 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sfmgx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.991545 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" podUID="43111b18-562c-46e1-be8e-56ed79f40d3b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.992742 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.992817 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.992847 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:13 crc kubenswrapper[4713]: I0126 15:36:13.997282 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:13 crc kubenswrapper[4713]: E0126 15:36:13.998763 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.498746696 +0000 UTC m=+149.635763931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.005518 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" podStartSLOduration=127.005500488 podStartE2EDuration="2m7.005500488s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:14.00383411 +0000 UTC m=+149.140851345" watchObservedRunningTime="2026-01-26 15:36:14.005500488 +0000 UTC m=+149.142517723" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.006781 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.049304 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.094379 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.095875 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.595841465 +0000 UTC m=+149.732858700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.156592 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.171768 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.199319 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.199723 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.699708537 +0000 UTC m=+149.836725772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.319090 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.319614 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.819585994 +0000 UTC m=+149.956603229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.425571 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.426088 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:14.9260736 +0000 UTC m=+150.063090835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.531419 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.531725 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.031706502 +0000 UTC m=+150.168723737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.633390 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.634037 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.133997569 +0000 UTC m=+150.271014804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.734768 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.734899 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.234877856 +0000 UTC m=+150.371895081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.735103 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.735476 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.235467923 +0000 UTC m=+150.372485148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.762896 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:14 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:14 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:14 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.762971 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.836573 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.836971 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.336947907 +0000 UTC m=+150.473965152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.938857 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:14 crc kubenswrapper[4713]: E0126 15:36:14.939764 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.439744689 +0000 UTC m=+150.576761924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.945462 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 15:31:13 +0000 UTC, rotation deadline is 2026-10-15 23:55:49.238086222 +0000 UTC Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.945513 4713 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6296h19m34.292577459s for next certificate rotation Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.960427 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" event={"ID":"0afe1ab0-3817-4d66-aaf9-e99181ae0a55","Type":"ContainerStarted","Data":"174210d4dea3f0d359ad2fe2b7bd2ebb30c4dcf484dee93dfcbd5d19b469de0f"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.961639 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" event={"ID":"b73b6008-1681-42fa-b5bb-771a022070d9","Type":"ContainerStarted","Data":"4b4fdf37d950aa12a8a1f5fedb45fb7ceefac835b0f703aba5795fdfc151c58e"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.964190 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" event={"ID":"881498d7-4eaa-4654-8c22-61b0060761c0","Type":"ContainerStarted","Data":"db8c69618f581961942d87f966d9d62a214a17b3c0dcbcc6610b1ca207b0452a"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.968637 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" event={"ID":"a9491684-a2f5-4ec9-a42b-7db8021c410f","Type":"ContainerStarted","Data":"05f27ccab4c2398ca58d64700fa07233f47463c1842a3c6d7e4da983939e4c97"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.970284 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" event={"ID":"9074ebe7-3cae-403d-8152-d99fbcbfdf2b","Type":"ContainerStarted","Data":"d52ee00cc13dbb60cd7786041afe34cb04b7fef945039f4d6979553eb9d5b6b6"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.976200 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" event={"ID":"ca13e433-706e-4733-97e9-5ef2af9d4d19","Type":"ContainerStarted","Data":"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.976974 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.981524 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" event={"ID":"f8886930-6560-40e0-bb1f-4b63bfd27a39","Type":"ContainerStarted","Data":"8791d872f75c409332386cdd16b37781e3caddc4625d83c69a59387be93404d2"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.984119 
4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" podStartSLOduration=128.98410787 podStartE2EDuration="2m8.98410787s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:14.983674517 +0000 UTC m=+150.120691742" watchObservedRunningTime="2026-01-26 15:36:14.98410787 +0000 UTC m=+150.121125095" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.988203 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" event={"ID":"6cb5773d-d638-4e73-a955-b936c27c9d7f","Type":"ContainerStarted","Data":"a969dfccc0f5462f03a7eb98d6ac0bff90cd76e56c4aec270f35a2a8fac94764"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.997458 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" event={"ID":"5c915437-e230-4e10-96d6-aa86c170f1b6","Type":"ContainerStarted","Data":"c6b55f111d30570841ef306faa9ebf5c10e806178dd1884d87f8d200ecfad4a6"} Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.998559 4713 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kr92c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.998599 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" podUID="5c915437-e230-4e10-96d6-aa86c170f1b6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 26 15:36:14 crc kubenswrapper[4713]: I0126 15:36:14.999952 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9095b8af818936c5efe5bf4167476820c44abdb1541a642b506c42bdda6d3228"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.006337 4713 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-574q9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.006456 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.006661 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" event={"ID":"3200b97d-6535-4cfb-981c-aa18f461fff5","Type":"ContainerStarted","Data":"e38ad12894648542e5a815c63811f1383e91b188022234a7b59134f9d5a9388b"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.008041 4713 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9hkc7" podStartSLOduration=128.008005909 podStartE2EDuration="2m8.008005909s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.00205394 +0000 UTC m=+150.139071175" watchObservedRunningTime="2026-01-26 15:36:15.008005909 +0000 UTC m=+150.145023144" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.012060 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" event={"ID":"43111b18-562c-46e1-be8e-56ed79f40d3b","Type":"ContainerStarted","Data":"6ad9059601dd6990d0dafe088cee5352a6b195786d0ca3ff3f5dba4affbf230d"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.012916 4713 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sfmgx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.012973 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" podUID="43111b18-562c-46e1-be8e-56ed79f40d3b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.022779 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gcgb8" podStartSLOduration=129.022753068 podStartE2EDuration="2m9.022753068s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.020672229 +0000 UTC m=+150.157689484" watchObservedRunningTime="2026-01-26 15:36:15.022753068 +0000 UTC m=+150.159770303" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.023142 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2rd4s" event={"ID":"c9e722bd-c443-4cb6-8104-e630a4c0b58f","Type":"ContainerStarted","Data":"de627140ef6643db8bddd010e6a35ef9558e9b6bc7b6a39e164d48d37afc214b"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.037936 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" event={"ID":"f8681ae0-298b-45e5-bef9-4dcb591bd1ec","Type":"ContainerStarted","Data":"7a194e12a00ea228a0fccd63f69d4d90495039d98987dcecd5384b26ea984e62"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.038352 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.041004 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc 
kubenswrapper[4713]: E0126 15:36:15.041523 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.541479 +0000 UTC m=+150.678496235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.041730 4713 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k94zt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.041786 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" podUID="f8681ae0-298b-45e5-bef9-4dcb591bd1ec" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.042394 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.043538 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.543518628 +0000 UTC m=+150.680535863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.051148 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" event={"ID":"56af4faf-3bc5-4902-a06e-8e794a313d1c","Type":"ContainerStarted","Data":"fbcdc2c3ec50d4dcc0d967d0f6dda96820e0c3c29e0f2ee57b4a167ea32cf50a"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.051204 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" event={"ID":"56af4faf-3bc5-4902-a06e-8e794a313d1c","Type":"ContainerStarted","Data":"f6541dc594707694c75e38af809ffe82149ef2b2f1550fd76fe407b3d151febc"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.063669 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" event={"ID":"adfa00ba-2415-46e4-b252-dbe5a74ab837","Type":"ContainerStarted","Data":"c5ed15fba122ed75c0b8fc1820e866769ca07832afdb025d9d6a245758c8f550"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.068341 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" event={"ID":"3c2e9103-9425-4cbd-8bb6-acf4aa336228","Type":"ContainerStarted","Data":"45522409797d0be172d2047ddadaf6a7cc256e4bdf5f22eae3d6ace8ab1d2e0d"} Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.068397 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.069279 4713 patch_prober.go:28] interesting pod/console-operator-58897d9998-hxmkn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.069318 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hxmkn" podUID="ab00e40d-3300-4351-89df-203b1bf11d72" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.079854 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.083741 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qx8q5" podStartSLOduration=129.083722741 podStartE2EDuration="2m9.083722741s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.081459446 +0000 UTC m=+150.218476691" watchObservedRunningTime="2026-01-26 
15:36:15.083722741 +0000 UTC m=+150.220739966" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.083879 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n4756" podStartSLOduration=128.083874235 podStartE2EDuration="2m8.083874235s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.04992915 +0000 UTC m=+150.186946385" watchObservedRunningTime="2026-01-26 15:36:15.083874235 +0000 UTC m=+150.220891470" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.087073 4713 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lsc7z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.087123 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.105206 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" podStartSLOduration=128.105187401 podStartE2EDuration="2m8.105187401s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.10412357 +0000 UTC m=+150.241140815" watchObservedRunningTime="2026-01-26 15:36:15.105187401 +0000 UTC m=+150.242204636" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.143355 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.143667 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.643626793 +0000 UTC m=+150.780644038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.143871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.148990 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.648967485 +0000 UTC m=+150.785984780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.254896 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.255967 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.755947295 +0000 UTC m=+150.892964530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.360086 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.360492 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.860470886 +0000 UTC m=+150.997488121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.369454 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plff7" podStartSLOduration=128.369432551 podStartE2EDuration="2m8.369432551s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.369266656 +0000 UTC m=+150.506283881" watchObservedRunningTime="2026-01-26 15:36:15.369432551 +0000 UTC m=+150.506449786" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.372616 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podStartSLOduration=129.372606321 podStartE2EDuration="2m9.372606321s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.327741906 +0000 UTC m=+150.464759161" watchObservedRunningTime="2026-01-26 15:36:15.372606321 +0000 UTC m=+150.509623556" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.465016 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.465614 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:15.965578813 +0000 UTC m=+151.102596058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.487189 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" podStartSLOduration=128.487164227 podStartE2EDuration="2m8.487164227s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.436802605 +0000 UTC m=+150.573819850" watchObservedRunningTime="2026-01-26 15:36:15.487164227 +0000 UTC m=+150.624181462" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.569895 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.570564 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.070525626 +0000 UTC m=+151.207542871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.673562 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.674028 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.173986016 +0000 UTC m=+151.311003251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.674240 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.674731 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.174715277 +0000 UTC m=+151.311732512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.763923 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:15 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:15 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:15 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.764271 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.776104 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.776376 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.276332373 +0000 UTC m=+151.413349608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.776712 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.777325 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.277307481 +0000 UTC m=+151.414324716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.837706 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jrzf" podStartSLOduration=128.837685847 podStartE2EDuration="2m8.837685847s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:15.491099298 +0000 UTC m=+150.628116523" watchObservedRunningTime="2026-01-26 15:36:15.837685847 +0000 UTC m=+150.974703082" Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.883938 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.884135 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.384105256 +0000 UTC m=+151.521122481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.884221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.884562 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.384547499 +0000 UTC m=+151.521564734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:15 crc kubenswrapper[4713]: I0126 15:36:15.985156 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:15 crc kubenswrapper[4713]: E0126 15:36:15.985593 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.48557568 +0000 UTC m=+151.622592915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.074814 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" event={"ID":"05b2187e-ae3f-460d-8bd1-0d950c1e0535","Type":"ContainerStarted","Data":"63250b1aba3017d03a5db12b9fe28dd14afe6e6d513586ca55d7d6a8e8f0e4f4"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.074972 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.078208 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" event={"ID":"6eb2408a-c785-4784-9f65-a2fe7d218903","Type":"ContainerStarted","Data":"f6448ba236209ffc3de78c7374232f848113e0fd280f6250958f6b450f12d197"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.080075 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9edc551abb8de26e894ee617aba91c10483690b4d782ae77b67f8ac9d8d6a4c2"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.080106 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"88fd4ff85e314dd5e3b2eb157ee4b9586dcb67c46608673350ff70eb1212ceb6"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.080758 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.083077 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" event={"ID":"ca27415a-5c07-49c1-be23-8ab77740e240","Type":"ContainerStarted","Data":"8cdcd29b5a8cb1b6f2f605ceb4a2ffc7d0dab9827f37177eef07faee69ac5b52"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.085073 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l6pr5" event={"ID":"dea92962-ec74-4c08-a114-63075ee610aa","Type":"ContainerStarted","Data":"bb8f008824d0e878e39492ac2148566c3ea57da8dc5c208e27ee9a28c475bcbc"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.086489 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.086908 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:16.586893419 +0000 UTC m=+151.723910664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.087079 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.093500 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" event={"ID":"a9491684-a2f5-4ec9-a42b-7db8021c410f","Type":"ContainerStarted","Data":"468cc974357be2161dabb2680f3c82cf31bef1e05b35500ae1cfefc3ad6afd31"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.096353 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"be812e23f96b19efe66346204aeb15e849044c033af531f3b4708e35d3d66d53"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.096430 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"54f1f39bab10b686f083f67ec144406a4a59960dbb55803af5fbe704d3553619"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.097835 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"03ece4cbc0587691d170ff5817434902c31769113d71e6927b5723b79b694a55"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.100280 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" event={"ID":"adfa00ba-2415-46e4-b252-dbe5a74ab837","Type":"ContainerStarted","Data":"971e7996ece9572977aec7f6d78475764c4d72f9c595c3fce0255436842b8ae9"} Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.100911 4713 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lsc7z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.100953 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101197 4713 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sfmgx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 26 15:36:16 
crc kubenswrapper[4713]: I0126 15:36:16.101252 4713 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-574q9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101303 4713 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kr92c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101304 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101322 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c" podUID="5c915437-e230-4e10-96d6-aa86c170f1b6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101254 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx" podUID="43111b18-562c-46e1-be8e-56ed79f40d3b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101436 4713 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k94zt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.101473 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt" podUID="f8681ae0-298b-45e5-bef9-4dcb591bd1ec" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.123055 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" podStartSLOduration=130.123019606 podStartE2EDuration="2m10.123019606s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.111817818 +0000 UTC m=+151.248835053" watchObservedRunningTime="2026-01-26 15:36:16.123019606 +0000 UTC m=+151.260036841" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.187565 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.187819 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.687779527 +0000 UTC m=+151.824796762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.188306 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.192231 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.692210322 +0000 UTC m=+151.829227557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.231840 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dnw7f" podStartSLOduration=130.231806278 podStartE2EDuration="2m10.231806278s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.209752471 +0000 UTC m=+151.346769706" watchObservedRunningTime="2026-01-26 15:36:16.231806278 +0000 UTC m=+151.368823513" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.290429 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.290638 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.790600969 +0000 UTC m=+151.927618204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.290868 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.291444 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.791424642 +0000 UTC m=+151.928441867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.394765 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.394999 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.894960115 +0000 UTC m=+152.031977350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.395090 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.395449 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.895436108 +0000 UTC m=+152.032453343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.495775 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.496016 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:16.995976976 +0000 UTC m=+152.132994211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.570884 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l6pr5" podStartSLOduration=11.570865814 podStartE2EDuration="11.570865814s" podCreationTimestamp="2026-01-26 15:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.542841337 +0000 UTC m=+151.679858572" watchObservedRunningTime="2026-01-26 15:36:16.570865814 +0000 UTC m=+151.707883049" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.571163 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" podStartSLOduration=130.571157722 podStartE2EDuration="2m10.571157722s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.480240788 +0000 UTC m=+151.617258023" watchObservedRunningTime="2026-01-26 15:36:16.571157722 +0000 UTC m=+151.708174957" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.597918 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.598259 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:17.098245742 +0000 UTC m=+152.235262967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.699330 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.699550 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.19951717 +0000 UTC m=+152.336534405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.699657 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.700003 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.199989814 +0000 UTC m=+152.337007049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.756755 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:16 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:16 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:16 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.756833 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.800238 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.800496 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.300458299 +0000 UTC m=+152.437475534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.800647 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.800958 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.300943073 +0000 UTC m=+152.437960308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.854484 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h7k9" podStartSLOduration=129.854459244 podStartE2EDuration="2m9.854459244s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.831730508 +0000 UTC m=+151.968747733" watchObservedRunningTime="2026-01-26 15:36:16.854459244 +0000 UTC m=+151.991476479" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.856720 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" podStartSLOduration=129.856710508 podStartE2EDuration="2m9.856710508s" podCreationTimestamp="2026-01-26 15:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:16.679902533 +0000 UTC m=+151.816919768" watchObservedRunningTime="2026-01-26 15:36:16.856710508 +0000 UTC m=+151.993727743" Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.918531 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.918717 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.418684179 +0000 UTC m=+152.555701414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:16 crc kubenswrapper[4713]: I0126 15:36:16.918990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:16 crc kubenswrapper[4713]: E0126 15:36:16.919404 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.419395409 +0000 UTC m=+152.556412644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.021541 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.040350 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.540313476 +0000 UTC m=+152.677330711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.106467 4713 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-574q9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.106529 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.106745 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.123639 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.124223 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.62419714 +0000 UTC m=+152.761214375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.224727 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.224764 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.224790 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.224827 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.224838 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.225006 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.724972984 +0000 UTC m=+152.861990219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.225503 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.225838 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.725830288 +0000 UTC m=+152.862847523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.326844 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.327099 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.827062465 +0000 UTC m=+152.964079700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.327162 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.327528 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.827514028 +0000 UTC m=+152.964531263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.428271 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.428613 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:17.928594531 +0000 UTC m=+153.065611766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.475873 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.475934 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.478339 4713 patch_prober.go:28] interesting pod/console-f9d7485db-p5wsk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.478439 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p5wsk" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.533981 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.534377 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.034347276 +0000 UTC m=+153.171364511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.635599 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.636528 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:18.136492789 +0000 UTC m=+153.273510024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.737678 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.738227 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.23820318 +0000 UTC m=+153.375220475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.757763 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:17 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:17 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:17 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.757850 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.839495 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.839734 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.339696154 +0000 UTC m=+153.476713389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.839992 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.840336 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.340319522 +0000 UTC m=+153.477336757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.939397 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.939487 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.940837 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.941083 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.441048265 +0000 UTC m=+153.578065500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.941257 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:17 crc kubenswrapper[4713]: E0126 15:36:17.941641 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.441632971 +0000 UTC m=+153.578650206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:17 crc kubenswrapper[4713]: I0126 15:36:17.954436 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.042017 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.042319 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.542276732 +0000 UTC m=+153.679293977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.042656 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.044887 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.544874435 +0000 UTC m=+153.681891670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.094126 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.094543 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.131539 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mxfmm"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.144115 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.144347 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.644324102 +0000 UTC m=+153.781341337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.144477 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.144857 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.644846757 +0000 UTC m=+153.781863992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.246108 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.246409 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.746371062 +0000 UTC m=+153.883388297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.247031 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.250563 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.750544991 +0000 UTC m=+153.887562226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.348879 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.349295 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.849275217 +0000 UTC m=+153.986292452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.456661 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.456998 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:18.956986828 +0000 UTC m=+154.094004063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.558403 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.558725 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.058706489 +0000 UTC m=+154.195723724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.663213 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.663559 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.163546798 +0000 UTC m=+154.300564023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.758143 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:18 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld
Jan 26 15:36:18 crc kubenswrapper[4713]: [+]process-running ok
Jan 26 15:36:18 crc kubenswrapper[4713]: healthz check failed
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.758207 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.767178 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.767521 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.267483602 +0000 UTC m=+154.404500837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.767606 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.768025 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.268013817 +0000 UTC m=+154.405031122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.850804 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hxmkn"
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.868998 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.869240 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.369203283 +0000 UTC m=+154.506220518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.869393 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.869805 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.36979756 +0000 UTC m=+154.506814795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.970556 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.970831 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.47078882 +0000 UTC m=+154.607806055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:18 crc kubenswrapper[4713]: I0126 15:36:18.970944 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:18 crc kubenswrapper[4713]: E0126 15:36:18.971340 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.471323975 +0000 UTC m=+154.608341210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.071849 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.072465 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.572445089 +0000 UTC m=+154.709462324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.135276 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" event={"ID":"ca27415a-5c07-49c1-be23-8ab77740e240","Type":"ContainerStarted","Data":"1cd504d57c5b81b7d4a6c118c6534b93b66add3acbb8ce2a63449d84e4d81dc6"}
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.135618 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" event={"ID":"ca27415a-5c07-49c1-be23-8ab77740e240","Type":"ContainerStarted","Data":"bbf6b23756d79aeea42e51c592e070c16c544ae9c680b30df9058d8eaebd1a78"}
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.173580 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.174016 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.673996725 +0000 UTC m=+154.811013950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.274460 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.274988 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.774950214 +0000 UTC m=+154.911967459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.283774 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.329894 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.331490 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.339229 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.339492 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.377393 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.378162 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.878144786 +0000 UTC m=+155.015162021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.379248 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.479195 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.479435 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.979402814 +0000 UTC m=+155.116420049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.479515 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.479586 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.479613 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.479932 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:19.979924398 +0000 UTC m=+155.116941633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.492593 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-574q9"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.517801 4713 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.536539 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sfmgx"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.580785 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.580988 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.581053 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.581497 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:20.081479615 +0000 UTC m=+155.218496850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.582385 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.636243 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.682169 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.682988 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:20.182947778 +0000 UTC m=+155.319965023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.693305 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.755933 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lxzxj"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.784621 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:19 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld
Jan 26 15:36:19 crc kubenswrapper[4713]: [+]process-running ok
Jan 26 15:36:19 crc kubenswrapper[4713]: healthz check failed
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.784688 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.785394 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.785769 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:20.2857347 +0000 UTC m=+155.422751935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.835229 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr92c"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.857929 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k94zt"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.887234 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.887651 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:20.387635496 +0000 UTC m=+155.524652731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j6h8x" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.903737 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"]
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.904997 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.911428 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.942210 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"]
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.985321 4713 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T15:36:19.517825066Z","Handler":null,"Name":""}
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.990900 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.991277 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.991427 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.991463 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fhm\" (UniqueName: \"kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:19 crc kubenswrapper[4713]: E0126 15:36:19.991915 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:20.491897629 +0000 UTC m=+155.628914864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.996525 4713 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 15:36:19 crc kubenswrapper[4713]: I0126 15:36:19.996609 4713 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.094540 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.094595 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.094626 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67fhm\" (UniqueName: \"kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.094660 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.095180 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.095728 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.110719 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jd4ff"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.130666 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.130714 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.148612 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67fhm\" (UniqueName: \"kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm\") pod \"certified-operators-4j4cb\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.154457 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd4ff"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.154573 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.178739 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.224827 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" event={"ID":"ca27415a-5c07-49c1-be23-8ab77740e240","Type":"ContainerStarted","Data":"97283f58d6ad8bf4c0c2f626c6c88256fc190d8a3d44d786255fe9e015602542"}
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.262955 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4j4cb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.332631 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476s2\" (UniqueName: \"kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.332791 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.332853 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.352996 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.354379 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.436147 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476s2\" (UniqueName: \"kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.436226 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.436261 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.436703 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.437263 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.451330 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6vntj" podStartSLOduration=15.451305545 podStartE2EDuration="15.451305545s" podCreationTimestamp="2026-01-26 15:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:20.402999432 +0000 UTC m=+155.540016667" watchObservedRunningTime="2026-01-26 15:36:20.451305545 +0000 UTC m=+155.588322780"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.454168 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.505207 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x29hb"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.522429 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.538549 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.538595 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcb9\" (UniqueName: \"kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.538661 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.545097 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x29hb"]
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.603376 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476s2\" (UniqueName: \"kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2\") pod \"community-operators-jd4ff\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640634 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640701 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640743 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9w8\" (UniqueName: \"kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640784 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640818 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.640848 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcb9\" (UniqueName: \"kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.641318 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.641571 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.656270 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j6h8x\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.690191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcb9\" (UniqueName: \"kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9\") pod \"certified-operators-cbs5g\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.719939 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbs5g"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.749253 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.750673 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.750736 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9w8\" (UniqueName: \"kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.750780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.751309 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.751634 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.775230 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:20 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]process-running ok
Jan 26 15:36:20 crc kubenswrapper[4713]: healthz check failed
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.775291 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.778819 4713 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rfpbx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]log ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]etcd ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/max-in-flight-filter ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 26 15:36:20 crc kubenswrapper[4713]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/openshift.io-startinformers ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 26 15:36:20 crc kubenswrapper[4713]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 26 15:36:20 crc kubenswrapper[4713]: livez check failed
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.778906 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" podUID="6eb2408a-c785-4784-9f65-a2fe7d218903" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.786186 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9w8\" (UniqueName: \"kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8\") pod \"community-operators-x29hb\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " pod="openshift-marketplace/community-operators-x29hb"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.795748 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd4ff"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.800341 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.805790 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.870028 4713 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:36:20 crc kubenswrapper[4713]: I0126 15:36:20.893558 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f465s" Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.024786 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.235772 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"] Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.241380 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d09873d-422d-4578-89aa-b2001a79ac16","Type":"ContainerStarted","Data":"3e9f950f117b67c00359140fd76d2813a70a845f82c9df09b7e7c0a14daf6dec"} Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.328353 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"] Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.397149 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"] Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.471558 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x29hb"] Jan 26 15:36:21 crc kubenswrapper[4713]: W0126 15:36:21.487131 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cdc8e5a_b873_4cd2_aa55_377f7d19f6c6.slice/crio-8d155a5fa3292c1c581e289b2d7535bc937387fc79cc316fb714fbdadcc6fed8 WatchSource:0}: Error finding container 8d155a5fa3292c1c581e289b2d7535bc937387fc79cc316fb714fbdadcc6fed8: Status 404 returned error can't find the container with id 8d155a5fa3292c1c581e289b2d7535bc937387fc79cc316fb714fbdadcc6fed8 Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.499722 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd4ff"] Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.757556 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:21 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:21 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:21 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:21 crc kubenswrapper[4713]: I0126 15:36:21.757860 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.721530 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.722545 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" 
event={"ID":"3e40f73a-b547-4c3f-a7a7-125032576150","Type":"ContainerStarted","Data":"d5d0021e4a978e34f609b28b960df750e646356160cb370a7dd831dce2a85660"} Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.722581 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.724200 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.724229 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.724505 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725716 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725767 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerStarted","Data":"c3b0baa95c1e5b1ef0eb501bbf93e82d99c063b566c13047843fcb9fdb55a79c"} Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725795 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerStarted","Data":"6d5b8e73875635358189fb9d767e328f0a37d5d1c4fb04230322343a8afdc72f"} Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725810 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerStarted","Data":"a0d63a49b3bc5aa4346af01a510d7f66456ce45ba4a5b2e6fcedeb521dca0076"} Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725823 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerStarted","Data":"8d155a5fa3292c1c581e289b2d7535bc937387fc79cc316fb714fbdadcc6fed8"} Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.725891 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.729267 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.758038 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:22 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:22 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:22 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.758105 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806318 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806422 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfg5k\" (UniqueName: \"kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806455 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806492 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806518 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.806542 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7h9\" (UniqueName: \"kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " 
pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908280 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908386 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfg5k\" (UniqueName: \"kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908423 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908482 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908505 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908532 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7h9\" (UniqueName: \"kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.908919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.909232 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.909268 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " 
pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.909310 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.930103 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfg5k\" (UniqueName: \"kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k\") pod \"redhat-marketplace-5mg77\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:22 crc kubenswrapper[4713]: I0126 15:36:22.932165 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7h9\" (UniqueName: \"kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9\") pod \"redhat-marketplace-dpdqx\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.054735 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.076593 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.077726 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.079756 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.089480 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.103001 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.111994 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.112055 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4hm\" (UniqueName: \"kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.112095 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " 
pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.113817 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rfpbx" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.193242 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.214228 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.214333 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4hm\" (UniqueName: \"kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.214451 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.216210 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.216374 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.245677 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4hm\" (UniqueName: \"kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm\") pod \"redhat-operators-jpzjd\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.327976 4713 generic.go:334] "Generic (PLEG): container finished" podID="34325b63-2012-4f82-8860-c88e2847683b" containerID="84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485" exitCode=0 Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.328089 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerDied","Data":"84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.333901 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 
15:36:23.352238 4713 generic.go:334] "Generic (PLEG): container finished" podID="02195b48-5845-4f33-861e-e6527590c4d9" containerID="4d7b3d10647b8910b7342e698d4444751bacffb0c6892ad3aa5d91cb9b3a4b63" exitCode=0 Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.352307 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerDied","Data":"4d7b3d10647b8910b7342e698d4444751bacffb0c6892ad3aa5d91cb9b3a4b63"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.355960 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d09873d-422d-4578-89aa-b2001a79ac16","Type":"ContainerStarted","Data":"b1c99dcba0e6584cdbbddab61cd1315da5d4bcdfcbe6d60b9b543d5cefdbcda8"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.358551 4713 generic.go:334] "Generic (PLEG): container finished" podID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerID="5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c" exitCode=0 Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.358690 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerDied","Data":"5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.363737 4713 generic.go:334] "Generic (PLEG): container finished" podID="0afe1ab0-3817-4d66-aaf9-e99181ae0a55" containerID="174210d4dea3f0d359ad2fe2b7bd2ebb30c4dcf484dee93dfcbd5d19b469de0f" exitCode=0 Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.363826 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" event={"ID":"0afe1ab0-3817-4d66-aaf9-e99181ae0a55","Type":"ContainerDied","Data":"174210d4dea3f0d359ad2fe2b7bd2ebb30c4dcf484dee93dfcbd5d19b469de0f"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.397750 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" event={"ID":"3e40f73a-b547-4c3f-a7a7-125032576150","Type":"ContainerStarted","Data":"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.398978 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.415967 4713 generic.go:334] "Generic (PLEG): container finished" podID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerID="743156198b7d32a0ad2259a82edf19f4fd896b3c33e78af4ddd75d6b8abbed6f" exitCode=0 Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.416872 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerDied","Data":"743156198b7d32a0ad2259a82edf19f4fd896b3c33e78af4ddd75d6b8abbed6f"} Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.481834 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" podStartSLOduration=137.48180214 podStartE2EDuration="2m17.48180214s" podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:23.451905181 +0000 UTC m=+158.588922426" watchObservedRunningTime="2026-01-26 15:36:23.48180214 +0000 UTC m=+158.618819375" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.490477 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.495301 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.495278313 podStartE2EDuration="4.495278313s" podCreationTimestamp="2026-01-26 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:23.491460815 +0000 UTC m=+158.628478060" watchObservedRunningTime="2026-01-26 15:36:23.495278313 +0000 UTC m=+158.632295538" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.511657 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.513229 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.576143 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.632713 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.637438 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9pk\" (UniqueName: \"kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.637858 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.728824 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.739609 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.739665 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities\") pod \"redhat-operators-pvkg2\" 
(UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.739691 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9pk\" (UniqueName: \"kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.740606 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.740815 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.760053 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:23 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:23 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:23 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.760142 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.767338 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9pk\" (UniqueName: \"kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk\") pod \"redhat-operators-pvkg2\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.869143 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:36:23 crc kubenswrapper[4713]: I0126 15:36:23.983015 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.013690 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.022633 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.041219 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.041509 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.043152 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.148299 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.148398 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.154161 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.249277 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.249418 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.249838 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.302308 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.304045 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l6pr5" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.358058 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.454248 4713 generic.go:334] "Generic (PLEG): container finished" podID="0d09873d-422d-4578-89aa-b2001a79ac16" containerID="b1c99dcba0e6584cdbbddab61cd1315da5d4bcdfcbe6d60b9b543d5cefdbcda8" exitCode=0 Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.454334 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d09873d-422d-4578-89aa-b2001a79ac16","Type":"ContainerDied","Data":"b1c99dcba0e6584cdbbddab61cd1315da5d4bcdfcbe6d60b9b543d5cefdbcda8"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.479455 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerStarted","Data":"b5c6356a0583f0e503110859d3f03cfc893257533e1dbdebfc12891d26e276e8"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.479510 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerStarted","Data":"9a4664e68e03eac6315daefe3a4686a8f6f75c6d4cabada310d005409c66220c"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.482259 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerStarted","Data":"9652d0c8be9e4999859bb3174a1c5cf058fd05c081e70407f8e970fdda85bc1a"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.483986 4713 generic.go:334] "Generic (PLEG): container finished" podID="d7259d39-ff96-407d-b595-119128ba5677" containerID="447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe" exitCode=0 Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.484927 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerDied","Data":"447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.484947 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerStarted","Data":"d94549e467a8bc8429f2eeaa0f5268cd2337d28da17c8e9dd26bc37ac61cbddb"} Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.533925 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.757776 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:24 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:24 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:24 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.758142 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:24 crc 
kubenswrapper[4713]: I0126 15:36:24.765573 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:24 crc kubenswrapper[4713]: W0126 15:36:24.781021 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda4d1aefe_f55a_4477_aac0_e6a9c543002d.slice/crio-04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5 WatchSource:0}: Error finding container 04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5: Status 404 returned error can't find the container with id 04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5 Jan 26 15:36:24 crc kubenswrapper[4713]: I0126 15:36:24.902002 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.061139 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n4lh\" (UniqueName: \"kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh\") pod \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.061836 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume\") pod \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.061876 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume\") pod \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\" (UID: \"0afe1ab0-3817-4d66-aaf9-e99181ae0a55\") " Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.062630 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume" (OuterVolumeSpecName: "config-volume") pod "0afe1ab0-3817-4d66-aaf9-e99181ae0a55" (UID: "0afe1ab0-3817-4d66-aaf9-e99181ae0a55"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.071430 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh" (OuterVolumeSpecName: "kube-api-access-7n4lh") pod "0afe1ab0-3817-4d66-aaf9-e99181ae0a55" (UID: "0afe1ab0-3817-4d66-aaf9-e99181ae0a55"). InnerVolumeSpecName "kube-api-access-7n4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.072335 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0afe1ab0-3817-4d66-aaf9-e99181ae0a55" (UID: "0afe1ab0-3817-4d66-aaf9-e99181ae0a55"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.164508 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.164540 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n4lh\" (UniqueName: \"kubernetes.io/projected/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-kube-api-access-7n4lh\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.164551 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afe1ab0-3817-4d66-aaf9-e99181ae0a55-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.490654 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4d1aefe-f55a-4477-aac0-e6a9c543002d","Type":"ContainerStarted","Data":"04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.494202 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" event={"ID":"0afe1ab0-3817-4d66-aaf9-e99181ae0a55","Type":"ContainerDied","Data":"bf0859e31b85d07dfa3d282e089d3bbc370190e8495accdb9d64f9ac91cdd772"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.494229 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0859e31b85d07dfa3d282e089d3bbc370190e8495accdb9d64f9ac91cdd772" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.494282 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb" Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.496028 4713 generic.go:334] "Generic (PLEG): container finished" podID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerID="0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef" exitCode=0 Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.496100 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerDied","Data":"0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.507028 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerStarted","Data":"a0055e88063a1324d2a8502b0fba4082387b4b0284898fa8468a86f6fd961c8d"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.507088 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerStarted","Data":"c886224804d0d120265badaabd055e9396818b93d2ec5c4a11c662e548aa4e8b"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.509017 4713 generic.go:334] "Generic (PLEG): container finished" podID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerID="b5c6356a0583f0e503110859d3f03cfc893257533e1dbdebfc12891d26e276e8" exitCode=0 Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.509072 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerDied","Data":"b5c6356a0583f0e503110859d3f03cfc893257533e1dbdebfc12891d26e276e8"} Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.759084 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:25 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:25 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:25 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:25 crc kubenswrapper[4713]: I0126 15:36:25.759632 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.126622 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.295696 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access\") pod \"0d09873d-422d-4578-89aa-b2001a79ac16\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.295808 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir\") pod \"0d09873d-422d-4578-89aa-b2001a79ac16\" (UID: \"0d09873d-422d-4578-89aa-b2001a79ac16\") " Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.295977 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d09873d-422d-4578-89aa-b2001a79ac16" (UID: "0d09873d-422d-4578-89aa-b2001a79ac16"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.296517 4713 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d09873d-422d-4578-89aa-b2001a79ac16-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.323226 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d09873d-422d-4578-89aa-b2001a79ac16" (UID: "0d09873d-422d-4578-89aa-b2001a79ac16"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.411811 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09873d-422d-4578-89aa-b2001a79ac16-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.558662 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4d1aefe-f55a-4477-aac0-e6a9c543002d","Type":"ContainerStarted","Data":"8e8c88bc1fc389213a401623d861a4a7953fa020148bef6107f5a4b3a84c69b6"} Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.565681 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d09873d-422d-4578-89aa-b2001a79ac16","Type":"ContainerDied","Data":"3e9f950f117b67c00359140fd76d2813a70a845f82c9df09b7e7c0a14daf6dec"} Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.565727 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9f950f117b67c00359140fd76d2813a70a845f82c9df09b7e7c0a14daf6dec" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.565802 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.591280 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.59126308 podStartE2EDuration="2.59126308s" podCreationTimestamp="2026-01-26 15:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:26.588927934 +0000 UTC m=+161.725945199" watchObservedRunningTime="2026-01-26 15:36:26.59126308 +0000 UTC m=+161.728280315" Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.593264 4713 generic.go:334] "Generic (PLEG): container finished" podID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerID="a0055e88063a1324d2a8502b0fba4082387b4b0284898fa8468a86f6fd961c8d" exitCode=0 Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.593350 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerDied","Data":"a0055e88063a1324d2a8502b0fba4082387b4b0284898fa8468a86f6fd961c8d"} Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.762669 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:26 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:26 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:26 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:26 crc kubenswrapper[4713]: I0126 15:36:26.762744 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:26 crc kubenswrapper[4713]: E0126 15:36:26.903059 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-poda4d1aefe_f55a_4477_aac0_e6a9c543002d.slice/crio-conmon-8e8c88bc1fc389213a401623d861a4a7953fa020148bef6107f5a4b3a84c69b6.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.225103 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.225199 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.225890 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.225911 4713 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.476487 4713 patch_prober.go:28] interesting pod/console-f9d7485db-p5wsk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.476578 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p5wsk" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.750656 4713 generic.go:334] "Generic (PLEG): container finished" podID="a4d1aefe-f55a-4477-aac0-e6a9c543002d" containerID="8e8c88bc1fc389213a401623d861a4a7953fa020148bef6107f5a4b3a84c69b6" exitCode=0 Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.750732 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4d1aefe-f55a-4477-aac0-e6a9c543002d","Type":"ContainerDied","Data":"8e8c88bc1fc389213a401623d861a4a7953fa020148bef6107f5a4b3a84c69b6"} Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.759262 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:27 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:27 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:27 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:27 crc kubenswrapper[4713]: I0126 15:36:27.759314 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:28 crc kubenswrapper[4713]: I0126 15:36:28.757031 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:28 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:28 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:28 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:28 crc kubenswrapper[4713]: I0126 15:36:28.757611 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.331587 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.373428 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir\") pod \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.373542 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a4d1aefe-f55a-4477-aac0-e6a9c543002d" (UID: "a4d1aefe-f55a-4477-aac0-e6a9c543002d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.373659 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access\") pod \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\" (UID: \"a4d1aefe-f55a-4477-aac0-e6a9c543002d\") " Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.374060 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.375389 4713 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.380682 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a4d1aefe-f55a-4477-aac0-e6a9c543002d" (UID: "a4d1aefe-f55a-4477-aac0-e6a9c543002d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.393490 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f185439-f527-44bf-8362-a9cf40e00d3c-metrics-certs\") pod \"network-metrics-daemon-4vgps\" (UID: \"6f185439-f527-44bf-8362-a9cf40e00d3c\") " pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.473827 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vgps" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.476358 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d1aefe-f55a-4477-aac0-e6a9c543002d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.758477 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:29 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:29 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:29 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.758563 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.849290 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.862010 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4d1aefe-f55a-4477-aac0-e6a9c543002d","Type":"ContainerDied","Data":"04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5"} Jan 26 15:36:29 crc kubenswrapper[4713]: I0126 15:36:29.862707 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b6bfa7a3c479378e2d644ea9c8168fb7434a1ec41e0ec284c6a8fc82a297a5" Jan 26 15:36:30 crc kubenswrapper[4713]: I0126 15:36:30.094095 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4vgps"] Jan 26 15:36:30 crc kubenswrapper[4713]: I0126 15:36:30.758595 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:30 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:30 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:30 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:30 crc kubenswrapper[4713]: I0126 15:36:30.759316 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:30 crc kubenswrapper[4713]: I0126 15:36:30.868209 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vgps" event={"ID":"6f185439-f527-44bf-8362-a9cf40e00d3c","Type":"ContainerStarted","Data":"db663222f5081f22da4449255856bf9ad1696a448b53f0a3f2842cbc33106839"} Jan 26 15:36:31 crc kubenswrapper[4713]: I0126 15:36:31.890311 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vgps" 
event={"ID":"6f185439-f527-44bf-8362-a9cf40e00d3c","Type":"ContainerStarted","Data":"ba7c23b3925ce6666c29fcd95d12e1ff505e2a012205650055bc4fd28e9a233d"} Jan 26 15:36:32 crc kubenswrapper[4713]: I0126 15:36:32.185097 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:32 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:32 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:32 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:32 crc kubenswrapper[4713]: I0126 15:36:32.185437 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:32 crc kubenswrapper[4713]: I0126 15:36:32.758075 4713 patch_prober.go:28] interesting pod/router-default-5444994796-lxzxj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:32 crc kubenswrapper[4713]: [-]has-synced failed: reason withheld Jan 26 15:36:32 crc kubenswrapper[4713]: [+]process-running ok Jan 26 15:36:32 crc kubenswrapper[4713]: healthz check failed Jan 26 15:36:32 crc kubenswrapper[4713]: I0126 15:36:32.758145 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lxzxj" podUID="7222f4f9-aa40-4909-a75e-70b5c1ef00fd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:33 crc kubenswrapper[4713]: I0126 15:36:33.310572 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:36:33 crc kubenswrapper[4713]: I0126 15:36:33.310668 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:36:33 crc kubenswrapper[4713]: I0126 15:36:33.766435 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:33 crc kubenswrapper[4713]: I0126 15:36:33.770858 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lxzxj" Jan 26 15:36:34 crc kubenswrapper[4713]: I0126 15:36:34.006993 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vgps" event={"ID":"6f185439-f527-44bf-8362-a9cf40e00d3c","Type":"ContainerStarted","Data":"bd336b757d78749e2b4ae42311cef96481762d79f94caf5c2bac94ec79aa8abc"} Jan 26 15:36:34 crc kubenswrapper[4713]: I0126 15:36:34.034695 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4vgps" podStartSLOduration=148.03466783 podStartE2EDuration="2m28.03466783s" 
podCreationTimestamp="2026-01-26 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:34.033777015 +0000 UTC m=+169.170794250" watchObservedRunningTime="2026-01-26 15:36:34.03466783 +0000 UTC m=+169.171685075" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.225575 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.225655 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.226195 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.226214 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.226243 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.226893 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"009f0a7e7ac135036a23b146e141ff9a36ed250f0613565102e006b065fa5a2a"} pod="openshift-console/downloads-7954f5f757-55x6b" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.226971 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" containerID="cri-o://009f0a7e7ac135036a23b146e141ff9a36ed250f0613565102e006b065fa5a2a" gracePeriod=2 Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.227524 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.227550 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.475874 4713 patch_prober.go:28] interesting pod/console-f9d7485db-p5wsk container/console namespace/openshift-console: 
Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 15:36:37 crc kubenswrapper[4713]: I0126 15:36:37.476249 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p5wsk" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 15:36:38 crc kubenswrapper[4713]: I0126 15:36:38.106534 4713 generic.go:334] "Generic (PLEG): container finished" podID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerID="009f0a7e7ac135036a23b146e141ff9a36ed250f0613565102e006b065fa5a2a" exitCode=0 Jan 26 15:36:38 crc kubenswrapper[4713]: I0126 15:36:38.106612 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-55x6b" event={"ID":"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e","Type":"ContainerDied","Data":"009f0a7e7ac135036a23b146e141ff9a36ed250f0613565102e006b065fa5a2a"} Jan 26 15:36:40 crc kubenswrapper[4713]: I0126 15:36:40.806052 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:36:47 crc kubenswrapper[4713]: I0126 15:36:47.227623 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:47 crc kubenswrapper[4713]: I0126 15:36:47.228139 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:36:47 crc kubenswrapper[4713]: I0126 15:36:47.479991 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:47 crc kubenswrapper[4713]: I0126 15:36:47.485168 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:36:49 crc kubenswrapper[4713]: I0126 15:36:49.823308 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qpq9h" Jan 26 15:36:54 crc kubenswrapper[4713]: I0126 15:36:54.162178 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:57 crc kubenswrapper[4713]: I0126 15:36:57.224887 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:36:57 crc kubenswrapper[4713]: I0126 15:36:57.224988 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.800991 4713 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:00 crc kubenswrapper[4713]: E0126 15:37:00.801968 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afe1ab0-3817-4d66-aaf9-e99181ae0a55" containerName="collect-profiles" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.801987 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afe1ab0-3817-4d66-aaf9-e99181ae0a55" containerName="collect-profiles" Jan 26 15:37:00 crc kubenswrapper[4713]: E0126 15:37:00.802003 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d09873d-422d-4578-89aa-b2001a79ac16" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802012 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d09873d-422d-4578-89aa-b2001a79ac16" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: E0126 15:37:00.802025 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d1aefe-f55a-4477-aac0-e6a9c543002d" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802034 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d1aefe-f55a-4477-aac0-e6a9c543002d" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802164 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afe1ab0-3817-4d66-aaf9-e99181ae0a55" containerName="collect-profiles" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802180 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d09873d-422d-4578-89aa-b2001a79ac16" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802188 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d1aefe-f55a-4477-aac0-e6a9c543002d" containerName="pruner" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.802769 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.806941 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.807477 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.810287 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.810450 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.824442 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.913289 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.913443 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.913644 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:00 crc kubenswrapper[4713]: I0126 15:37:00.935616 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:01 crc kubenswrapper[4713]: I0126 15:37:01.141734 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:03 crc kubenswrapper[4713]: I0126 15:37:03.301516 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:37:03 crc kubenswrapper[4713]: I0126 15:37:03.302885 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.005629 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.008428 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.009471 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.189923 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.190100 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.190345 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.291777 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.291895 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.291898 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.292180 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.292270 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.319285 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access\") pod \"installer-9-crc\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: I0126 15:37:06.331736 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:06 crc kubenswrapper[4713]: E0126 15:37:06.832868 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 15:37:06 crc kubenswrapper[4713]: E0126 15:37:06.833470 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pvkg2_openshift-marketplace(b26921c6-11ce-4667-ad0c-bd7ff1366938): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Jan 26 15:37:06 crc kubenswrapper[4713]: E0126 15:37:06.834665 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pvkg2" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" Jan 26 15:37:07 crc kubenswrapper[4713]: I0126 15:37:07.224595 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:37:07 crc kubenswrapper[4713]: I0126 15:37:07.225032 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:37:08 crc kubenswrapper[4713]: E0126 15:37:08.351629 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pvkg2" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" Jan 26 15:37:08 crc kubenswrapper[4713]: E0126 15:37:08.417959 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 15:37:08 crc kubenswrapper[4713]: E0126 15:37:08.418166 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67fhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-4j4cb_openshift-marketplace(7263c807-ae6d-4fd4-af54-8372275f5c9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:08 crc kubenswrapper[4713]: E0126 15:37:08.419598 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4j4cb" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.614082 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4j4cb" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.677568 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.677801 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xk7h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dpdqx_openshift-marketplace(d7259d39-ff96-407d-b595-119128ba5677): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.680131 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-marketplace/redhat-marketplace-dpdqx" podUID="d7259d39-ff96-407d-b595-119128ba5677" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.757317 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.757548 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfg5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5mg77_openshift-marketplace(81c9faca-c7e6-4016-b528-5a1da4deacd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:09 crc kubenswrapper[4713]: E0126 15:37:09.758799 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5mg77" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" Jan 26 15:37:10 crc kubenswrapper[4713]: I0126 15:37:10.059494 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"] Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.331623 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dpdqx" podUID="d7259d39-ff96-407d-b595-119128ba5677" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.332436 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5mg77" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.440670 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.441412 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-476s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jd4ff_openshift-marketplace(34325b63-2012-4f82-8860-c88e2847683b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.446677 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jd4ff" podUID="34325b63-2012-4f82-8860-c88e2847683b" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.451921 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.452160 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvcb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cbs5g_openshift-marketplace(02195b48-5845-4f33-861e-e6527590c4d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.456591 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cbs5g" podUID="02195b48-5845-4f33-861e-e6527590c4d9" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.495610 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.495813 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj9w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x29hb_openshift-marketplace(6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.497578 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x29hb" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.498660 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.498805 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb4hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jpzjd_openshift-marketplace(2cfb6957-a47e-4a83-befa-dbfc6a986ee9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:11 crc kubenswrapper[4713]: E0126 15:37:11.499968 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jpzjd" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" Jan 26 15:37:11 crc kubenswrapper[4713]: I0126 15:37:11.820097 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:11 crc kubenswrapper[4713]: W0126 15:37:11.831761 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod79d4343a_0689_4165_bfcf_f8c842163b3c.slice/crio-8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817 WatchSource:0}: Error finding container 8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817: Status 404 returned error can't find the container with id 8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817 Jan 26 15:37:11 crc kubenswrapper[4713]: I0126 15:37:11.880617 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:11 crc kubenswrapper[4713]: W0126 15:37:11.892997 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod36284b41_4184_472e_967c_f0345cf1ae81.slice/crio-74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073 WatchSource:0}: Error finding container 74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073: Status 404 returned error can't find the container with id 74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073 Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.372509 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"36284b41-4184-472e-967c-f0345cf1ae81","Type":"ContainerStarted","Data":"7610969ab31de496a676582e8b0cd61d1769a13bebb2b28c395cf4b8709abe4f"} Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.372658 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36284b41-4184-472e-967c-f0345cf1ae81","Type":"ContainerStarted","Data":"74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073"} Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.377388 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-55x6b" event={"ID":"9b229eeb-448b-4abe-9ba0-fe7dfc6e589e","Type":"ContainerStarted","Data":"7f0dea5634ee392ac0ca98b6709ead54dd3928103eb2c9cc405349ec59a6b42a"} Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.377944 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-55x6b" Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.379075 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.379161 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.380230 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"79d4343a-0689-4165-bfcf-f8c842163b3c","Type":"ContainerStarted","Data":"cc777a8308fbed93829f52dffb0da0d7824460362609522b24de09d6ea6b5618"} Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.380303 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"79d4343a-0689-4165-bfcf-f8c842163b3c","Type":"ContainerStarted","Data":"8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817"} Jan 26 15:37:12 crc kubenswrapper[4713]: E0126 15:37:12.383824 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jd4ff" podUID="34325b63-2012-4f82-8860-c88e2847683b" Jan 26 15:37:12 crc kubenswrapper[4713]: E0126 15:37:12.383896 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cbs5g" podUID="02195b48-5845-4f33-861e-e6527590c4d9" Jan 26 15:37:12 crc kubenswrapper[4713]: E0126 15:37:12.383933 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jpzjd" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" Jan 26 15:37:12 crc kubenswrapper[4713]: E0126 15:37:12.384030 
Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.394980 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.394952968 podStartE2EDuration="7.394952968s" podCreationTimestamp="2026-01-26 15:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:37:12.392095445 +0000 UTC m=+207.529112700" watchObservedRunningTime="2026-01-26 15:37:12.394952968 +0000 UTC m=+207.531970203"
Jan 26 15:37:12 crc kubenswrapper[4713]: I0126 15:37:12.542423 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.542387689 podStartE2EDuration="12.542387689s" podCreationTimestamp="2026-01-26 15:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:37:12.539126464 +0000 UTC m=+207.676143699" watchObservedRunningTime="2026-01-26 15:37:12.542387689 +0000 UTC m=+207.679404924"
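The podStartSLOduration values above are watchObservedRunningTime minus podCreationTimestamp; the pull timestamps are zero here, so the SLO and E2E durations coincide. A quick Go check of the installer-9-crc arithmetic, reusing the exact timestamps from that entry:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes podStartSLOduration for installer-9-crc from the two timestamps
// logged above: watchObservedRunningTime - podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 15:37:05 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-26 15:37:12.394952968 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 7.394952968s, matching the log
}
```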
Jan 26 15:37:13 crc kubenswrapper[4713]: I0126 15:37:13.387738 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 26 15:37:13 crc kubenswrapper[4713]: I0126 15:37:13.387836 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 26 15:37:15 crc kubenswrapper[4713]: I0126 15:37:15.400130 4713 generic.go:334] "Generic (PLEG): container finished" podID="79d4343a-0689-4165-bfcf-f8c842163b3c" containerID="cc777a8308fbed93829f52dffb0da0d7824460362609522b24de09d6ea6b5618" exitCode=0
Jan 26 15:37:15 crc kubenswrapper[4713]: I0126 15:37:15.400427 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"79d4343a-0689-4165-bfcf-f8c842163b3c","Type":"ContainerDied","Data":"cc777a8308fbed93829f52dffb0da0d7824460362609522b24de09d6ea6b5618"}
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.699994 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.861055 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir\") pod \"79d4343a-0689-4165-bfcf-f8c842163b3c\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") "
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.861218 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access\") pod \"79d4343a-0689-4165-bfcf-f8c842163b3c\" (UID: \"79d4343a-0689-4165-bfcf-f8c842163b3c\") "
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.861275 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79d4343a-0689-4165-bfcf-f8c842163b3c" (UID: "79d4343a-0689-4165-bfcf-f8c842163b3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.861624 4713 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79d4343a-0689-4165-bfcf-f8c842163b3c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.869562 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79d4343a-0689-4165-bfcf-f8c842163b3c" (UID: "79d4343a-0689-4165-bfcf-f8c842163b3c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
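The revision-pruner teardown above walks the kubelet reconciler's unmount lifecycle: "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached". A hedged sketch that scans journal text on stdin and flags volumes whose unmount started but never logged a detach; the patterns mirror the escaped quoting in these lines and are illustrative only:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Tracks the teardown lifecycle seen above and reports volumes stuck between
// "UnmountVolume started" and "Volume detached".
var (
	started  = regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\"`)
	detached = regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	pending := map[string]bool{}
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		}
		if m := detached.FindStringSubmatch(line); m != nil {
			delete(pending, m[1])
		}
	}
	for v := range pending {
		fmt.Println("unmount started but no detach logged:", v)
	}
}
```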
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:16 crc kubenswrapper[4713]: I0126 15:37:16.966625 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79d4343a-0689-4165-bfcf-f8c842163b3c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.225789 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.225877 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.225937 4713 patch_prober.go:28] interesting pod/downloads-7954f5f757-55x6b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.226042 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-55x6b" podUID="9b229eeb-448b-4abe-9ba0-fe7dfc6e589e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.414320 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"79d4343a-0689-4165-bfcf-f8c842163b3c","Type":"ContainerDied","Data":"8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817"} Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.414697 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd2bc5c9865194fff039f413b52aa8dd6c988402f7c42e277781d38cec19817" Jan 26 15:37:17 crc kubenswrapper[4713]: I0126 15:37:17.414405 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:37:17 crc kubenswrapper[4713]: E0126 15:37:17.534929 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod79d4343a_0689_4165_bfcf_f8c842163b3c.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 15:37:24 crc kubenswrapper[4713]: I0126 15:37:24.465993 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerStarted","Data":"b794d2c20ecd400ab32cfab1f17efb0941678173f03299dfe078b96633783cd7"}
Jan 26 15:37:24 crc kubenswrapper[4713]: I0126 15:37:24.469461 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerStarted","Data":"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be"}
Jan 26 15:37:25 crc kubenswrapper[4713]: I0126 15:37:25.479620 4713 generic.go:334] "Generic (PLEG): container finished" podID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerID="b794d2c20ecd400ab32cfab1f17efb0941678173f03299dfe078b96633783cd7" exitCode=0
Jan 26 15:37:25 crc kubenswrapper[4713]: I0126 15:37:25.479699 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerDied","Data":"b794d2c20ecd400ab32cfab1f17efb0941678173f03299dfe078b96633783cd7"}
Jan 26 15:37:25 crc kubenswrapper[4713]: I0126 15:37:25.482189 4713 generic.go:334] "Generic (PLEG): container finished" podID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerID="c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be" exitCode=0
Jan 26 15:37:25 crc kubenswrapper[4713]: I0126 15:37:25.482254 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerDied","Data":"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be"}
Jan 26 15:37:25 crc kubenswrapper[4713]: I0126 15:37:25.484185 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerStarted","Data":"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c"}
Jan 26 15:37:26 crc kubenswrapper[4713]: I0126 15:37:26.492487 4713 generic.go:334] "Generic (PLEG): container finished" podID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerID="69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c" exitCode=0
Jan 26 15:37:26 crc kubenswrapper[4713]: I0126 15:37:26.492553 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerDied","Data":"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c"}
Jan 26 15:37:27 crc kubenswrapper[4713]: I0126 15:37:27.234015 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-55x6b"
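Each catalog pod's container above starts and exits 0 roughly a second later (b794d2…, c806a7…, 69adaf…). A sketch that pairs PLEG ContainerStarted/ContainerDied events by container ID from journal text on stdin and prints lifetimes; the timestamp parsing assumes the two-digit-day journald prefix seen in this log:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Pairs ContainerStarted/ContainerDied PLEG events by 64-hex container ID and
// prints each container's wall-clock lifetime from the journald timestamps.
var (
	stamp = regexp.MustCompile(`^([A-Z][a-z]{2} \d+ \d{2}:\d{2}:\d{2}) `)
	event = regexp.MustCompile(`"Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]{64})"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	starts := map[string]time.Time{}
	for sc.Scan() {
		line := sc.Text()
		ts := stamp.FindStringSubmatch(line)
		ev := event.FindStringSubmatch(line)
		if ts == nil || ev == nil {
			continue
		}
		t, err := time.Parse("Jan 2 15:04:05", ts[1])
		if err != nil {
			continue
		}
		switch ev[1] {
		case "ContainerStarted":
			starts[ev[2]] = t
		case "ContainerDied":
			if s, ok := starts[ev[2]]; ok {
				fmt.Printf("%s ran %s\n", ev[2][:12], t.Sub(s))
			}
		}
	}
}
```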
Jan 26 15:37:33 crc kubenswrapper[4713]: I0126 15:37:33.301865 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:37:33 crc kubenswrapper[4713]: I0126 15:37:33.302821 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:37:33 crc kubenswrapper[4713]: I0126 15:37:33.302873 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2"
Jan 26 15:37:33 crc kubenswrapper[4713]: I0126 15:37:33.303555 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:37:33 crc kubenswrapper[4713]: I0126 15:37:33.303624 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c" gracePeriod=600
Jan 26 15:37:35 crc kubenswrapper[4713]: I0126 15:37:35.104557 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" containerID="cri-o://45522409797d0be172d2047ddadaf6a7cc256e4bdf5f22eae3d6ace8ab1d2e0d" gracePeriod=15
Jan 26 15:37:35 crc kubenswrapper[4713]: I0126 15:37:35.552234 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c" exitCode=0
Jan 26 15:37:35 crc kubenswrapper[4713]: I0126 15:37:35.552294 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c"}
Jan 26 15:37:36 crc kubenswrapper[4713]: I0126 15:37:36.560987 4713 generic.go:334] "Generic (PLEG): container finished" podID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerID="45522409797d0be172d2047ddadaf6a7cc256e4bdf5f22eae3d6ace8ab1d2e0d" exitCode=0
Jan 26 15:37:36 crc kubenswrapper[4713]: I0126 15:37:36.561043 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" event={"ID":"3c2e9103-9425-4cbd-8bb6-acf4aa336228","Type":"ContainerDied","Data":"45522409797d0be172d2047ddadaf6a7cc256e4bdf5f22eae3d6ace8ab1d2e0d"}
Jan 26 15:37:39 crc kubenswrapper[4713]: I0126 15:37:39.271098 4713 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lsc7z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Jan 26 15:37:39 crc kubenswrapper[4713]: I0126 15:37:39.275326 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
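gracePeriod in the "Killing container with a grace period" entries is the window between SIGTERM and SIGKILL: 600s for the machine-config-daemon liveness restart, 15s for oauth-openshift, and both exit 0 well within it. A local demonstration of that pattern on a stand-in process; the kubelet and CRI-O do the equivalent for the real containers:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// Send SIGTERM, give the process a grace period to exit cleanly, then SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL once the grace period lapses
		<-done
		fmt.Println("force-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // the log shows 15s and 600s grace periods
}
```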
pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.928258 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.976937 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"] Jan 26 15:37:47 crc kubenswrapper[4713]: E0126 15:37:47.977391 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.977406 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" Jan 26 15:37:47 crc kubenswrapper[4713]: E0126 15:37:47.977423 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d4343a-0689-4165-bfcf-f8c842163b3c" containerName="pruner" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.977429 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d4343a-0689-4165-bfcf-f8c842163b3c" containerName="pruner" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.977581 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" containerName="oauth-openshift" Jan 26 15:37:47 crc kubenswrapper[4713]: I0126 15:37:47.977596 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d4343a-0689-4165-bfcf-f8c842163b3c" containerName="pruner" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:47.978488 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:47.988823 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"] Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.061822 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.061950 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.061963 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062019 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062059 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062148 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062190 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062224 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062285 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqj8q\" (UniqueName: \"kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062350 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062400 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062449 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") " Jan 26 15:37:48 crc 
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062481 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") "
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062508 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") "
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062540 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login\") pod \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\" (UID: \"3c2e9103-9425-4cbd-8bb6-acf4aa336228\") "
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062788 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062831 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tw2\" (UniqueName: \"kubernetes.io/projected/83f3415e-59a8-40a1-b6ac-77bdc12a3368-kube-api-access-j7tw2\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062872 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062895 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062919 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.062982 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063022 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063071 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-policies\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063101 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063130 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063156 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063188 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " 
pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063223 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-dir\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063273 4713 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063008 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063445 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.063601 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.064053 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.069737 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.069737 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.069971 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.070449 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q" (OuterVolumeSpecName: "kube-api-access-dqj8q") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "kube-api-access-dqj8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.070535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.071015 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.071328 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.072147 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.078905 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3c2e9103-9425-4cbd-8bb6-acf4aa336228" (UID: "3c2e9103-9425-4cbd-8bb6-acf4aa336228"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165189 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tw2\" (UniqueName: \"kubernetes.io/projected/83f3415e-59a8-40a1-b6ac-77bdc12a3368-kube-api-access-j7tw2\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165287 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165328 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165347 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165380 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165407 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165434 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165466 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-policies\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc 
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165485 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165507 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165527 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165547 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165568 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-dir\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165600 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165643 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165658 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165668 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165679 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165690 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165699 4713 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165709 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165719 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165729 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165739 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165749 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqj8q\" (UniqueName: \"kubernetes.io/projected/3c2e9103-9425-4cbd-8bb6-acf4aa336228-kube-api-access-dqj8q\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165762 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.165774 4713 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c2e9103-9425-4cbd-8bb6-acf4aa336228-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.167015 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
\"kubernetes.io/host-path/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-dir\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.167635 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-audit-policies\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.168183 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.169080 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.171570 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.172036 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.172051 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.172209 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.173027 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.173972 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.175635 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.180022 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83f3415e-59a8-40a1-b6ac-77bdc12a3368-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.186191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7tw2\" (UniqueName: \"kubernetes.io/projected/83f3415e-59a8-40a1-b6ac-77bdc12a3368-kube-api-access-j7tw2\") pod \"oauth-openshift-ccc74cc7-5w8hc\" (UID: \"83f3415e-59a8-40a1-b6ac-77bdc12a3368\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.340202 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.652941 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsc7z" event={"ID":"3c2e9103-9425-4cbd-8bb6-acf4aa336228","Type":"ContainerDied","Data":"961bd20a7dd186c2344da263db19ce430de5645816a26beb3278878767445df7"} Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.653034 4713 scope.go:117] "RemoveContainer" containerID="45522409797d0be172d2047ddadaf6a7cc256e4bdf5f22eae3d6ace8ab1d2e0d" Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.653058 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.697806 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"]
Jan 26 15:37:48 crc kubenswrapper[4713]: I0126 15:37:48.701526 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsc7z"]
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.812481 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2e9103-9425-4cbd-8bb6-acf4aa336228" path="/var/lib/kubelet/pods/3c2e9103-9425-4cbd-8bb6-acf4aa336228/volumes"
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.997500 4713 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.997836 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15" gracePeriod=15
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.997880 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5" gracePeriod=15
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.997976 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493" gracePeriod=15
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.998025 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888" gracePeriod=15
Jan 26 15:37:49 crc kubenswrapper[4713]: I0126 15:37:49.998057 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24" gracePeriod=15
Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000010 4713 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000356 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000400 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000424 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
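SyncLoop REMOVE/ADD with source="file" means the kube-apiserver static pod manifest was rewritten on disk, so the kubelet kills all five containers of the old revision with gracePeriod=15 and picks up the new revision plus a startup-monitor pod. A small sketch listing the static pod manifest directory with mtimes to see which file changed; /etc/kubernetes/manifests is the usual staticPodPath, adjust if yours differs:

```go
package main

import (
	"fmt"
	"os"
)

// Lists the kubelet's static pod manifest directory - the "file" source behind
// the SyncLoop REMOVE/ADD pair above - with modification times.
func main() {
	const dir = "/etc/kubernetes/manifests" // assumed staticPodPath
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		info, err := e.Info()
		if err != nil {
			continue
		}
		fmt.Printf("%s  %s\n", info.ModTime().Format("2006-01-02 15:04:05"), e.Name())
	}
}
```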
containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000433 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000443 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000450 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000460 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000468 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000480 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000489 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000503 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000510 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.000521 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000528 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000663 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000677 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000688 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000698 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000708 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.000721 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:37:50 crc 
Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.004960 4713 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.005869 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.010913 4713 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.091151 4713 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-tn7l2.188e51cdf7e2e2c3\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-tn7l2.188e51cdf7e2e2c3 openshift-machine-config-operator 26710 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-tn7l2,UID:f608dd80-4cbf-4490-b062-2bef233d25d1,APIVersion:v1,ResourceVersion:26683,FieldPath:spec.containers{machine-config-daemon},},Reason:Created,Message:Created container machine-config-daemon,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:34:08 +0000 UTC,LastTimestamp:2026-01-26 15:37:50.089701569 +0000 UTC m=+245.226718804,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.091844 4713 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095727 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095775 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095822 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095869 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName:
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095898 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095924 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.095972 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.096022 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197319 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197679 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197708 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197729 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197764 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197798 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197822 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197848 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197874 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197898 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197939 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197490 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197845 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197952 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.197987 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.198164 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.399425 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:50 crc kubenswrapper[4713]: W0126 15:37:50.426596 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-4e2e39e7f2f207804e440e7974fc922af49638878cd0308d104b8d12f7a7e3aa WatchSource:0}: Error finding container 4e2e39e7f2f207804e440e7974fc922af49638878cd0308d104b8d12f7a7e3aa: Status 404 returned error can't find the container with id 4e2e39e7f2f207804e440e7974fc922af49638878cd0308d104b8d12f7a7e3aa Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.658588 4713 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 15:37:50 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2" Netns:"/var/run/netns/1423d801-75a2-42c3-b02f-316be0e6efa7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:50 crc kubenswrapper[4713]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:50 crc kubenswrapper[4713]: > Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.659148 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 15:37:50 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2" Netns:"/var/run/netns/1423d801-75a2-42c3-b02f-316be0e6efa7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:50 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:50 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.659188 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 26 15:37:50 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2" Netns:"/var/run/netns/1423d801-75a2-42c3-b02f-316be0e6efa7" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:50 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:50 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:50 crc kubenswrapper[4713]: E0126 15:37:50.659281 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2\\\" Netns:\\\"/var/run/netns/1423d801-75a2-42c3-b02f-316be0e6efa7\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=7d9b8a18e6e795e6d5ede020092e28229a41ce133fd350b09fc2839f75c1f8c2;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.669592 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerStarted","Data":"85027c2bd1df3c84c1fb207919564a9bb15fb82c858dd52edf17921038cc6991"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.672279 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.673287 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.674785 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.676084 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.678684 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.678717 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.678732 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.678743 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24" exitCode=2 Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.678813 4713 scope.go:117] "RemoveContainer" containerID="14343469d36eac16cb647cd1629b1099423cf68e52d4931443830447395b2f22" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.682842 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" 
event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerStarted","Data":"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.684457 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.684707 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.686034 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerStarted","Data":"ef691122833354f136720a3db0cf647f33f77d4d4594259ef777e2207506ae54"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.686760 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4e2e39e7f2f207804e440e7974fc922af49638878cd0308d104b8d12f7a7e3aa"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.688611 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerStarted","Data":"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.689626 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.689806 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.690012 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.691759 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.691770 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerStarted","Data":"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"} Jan 26 15:37:50 crc kubenswrapper[4713]: I0126 15:37:50.692251 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:51 crc kubenswrapper[4713]: E0126 15:37:51.337881 4713 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 15:37:51 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649" Netns:"/var/run/netns/535594a5-dd10-4409-b68a-2ef60739d7ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:51 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:51 crc kubenswrapper[4713]: > Jan 26 15:37:51 crc kubenswrapper[4713]: E0126 15:37:51.338456 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 15:37:51 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649" 
Netns:"/var/run/netns/535594a5-dd10-4409-b68a-2ef60739d7ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:51 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:51 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:51 crc kubenswrapper[4713]: E0126 15:37:51.338495 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 26 15:37:51 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649" Netns:"/var/run/netns/535594a5-dd10-4409-b68a-2ef60739d7ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:37:51 crc kubenswrapper[4713]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:37:51 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:37:51 crc kubenswrapper[4713]: E0126 15:37:51.338589 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649\\\" Netns:\\\"/var/run/netns/535594a5-dd10-4409-b68a-2ef60739d7ac\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=ff4dafaad73f0cb5eb7820be103c6873173a7a8ce43727b9dfb5d67809320649;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.699901 4713 generic.go:334] "Generic (PLEG): container finished" podID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerID="ef691122833354f136720a3db0cf647f33f77d4d4594259ef777e2207506ae54" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.700044 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" 
event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerDied","Data":"ef691122833354f136720a3db0cf647f33f77d4d4594259ef777e2207506ae54"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.700881 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.701290 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.701909 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.701928 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c9b10bd378e1fdb773488fb955ab36931092f7ca76ee5ade74977351e582531e"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.702425 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: E0126 15:37:51.702647 4713 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.702823 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.703010 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.703475 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.703999 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.704728 4713 generic.go:334] "Generic (PLEG): container finished" podID="02195b48-5845-4f33-861e-e6527590c4d9" containerID="60d69218d47c032f428e572b5b05b3bb4ed68b4dffd739a2c1ffbacb5a2a60b5" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.704793 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerDied","Data":"60d69218d47c032f428e572b5b05b3bb4ed68b4dffd739a2c1ffbacb5a2a60b5"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.705801 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.706158 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.707675 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.708341 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.708653 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.711210 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerStarted","Data":"b28fe47b065f8d3f2a5d0a63990a294c25ad99f0142a582b3428c315dd956664"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.712193 4713 
status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.712548 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.712876 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.713092 4713 generic.go:334] "Generic (PLEG): container finished" podID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerID="85027c2bd1df3c84c1fb207919564a9bb15fb82c858dd52edf17921038cc6991" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.713148 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.713190 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerDied","Data":"85027c2bd1df3c84c1fb207919564a9bb15fb82c858dd52edf17921038cc6991"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.713524 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.713897 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.714382 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.714660 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.714993 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.715285 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.715573 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.715870 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.716186 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.719865 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.723442 4713 generic.go:334] "Generic (PLEG): container finished" podID="36284b41-4184-472e-967c-f0345cf1ae81" containerID="7610969ab31de496a676582e8b0cd61d1769a13bebb2b28c395cf4b8709abe4f" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.723492 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36284b41-4184-472e-967c-f0345cf1ae81","Type":"ContainerDied","Data":"7610969ab31de496a676582e8b0cd61d1769a13bebb2b28c395cf4b8709abe4f"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.724439 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.725402 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" 
pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.725873 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.725890 4713 generic.go:334] "Generic (PLEG): container finished" podID="d7259d39-ff96-407d-b595-119128ba5677" containerID="0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.725938 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerDied","Data":"0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.726066 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.726346 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.726634 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.727092 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.727573 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.728009 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.728318 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.728574 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.728814 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.728944 4713 generic.go:334] "Generic (PLEG): container finished" podID="34325b63-2012-4f82-8860-c88e2847683b" containerID="d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2" exitCode=0 Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.729026 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerDied","Data":"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"} Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.729125 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.729553 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.730021 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.731765 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.732204 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.732927 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.733207 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.733539 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.733843 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.734153 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.734489 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.734803 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.735161 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.735570 4713 status_manager.go:851] 
"Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:51 crc kubenswrapper[4713]: I0126 15:37:51.735857 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.744028 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerStarted","Data":"1fee69ce490a780e1d2a5bc6bbce49acb94dfbd30fc1da8fcc5e2c564a49694c"} Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.746788 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.750595 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.752483 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.752696 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.752934 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.753156 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.753507 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.753753 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.753962 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.754149 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.754436 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.755024 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15" exitCode=0 Jan 26 15:37:52 crc kubenswrapper[4713]: E0126 15:37:52.755740 4713 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:37:52 crc kubenswrapper[4713]: E0126 15:37:52.857562 4713 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-tn7l2.188e51cdf7e2e2c3\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-tn7l2.188e51cdf7e2e2c3 openshift-machine-config-operator 26710 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-tn7l2,UID:f608dd80-4cbf-4490-b062-2bef233d25d1,APIVersion:v1,ResourceVersion:26683,FieldPath:spec.containers{machine-config-daemon},},Reason:Created,Message:Created container machine-config-daemon,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:34:08 +0000 UTC,LastTimestamp:2026-01-26 15:37:50.089701569 +0000 UTC m=+245.226718804,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.929283 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.933226 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.933665 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.946622 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.951944 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.952316 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.954707 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.954982 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.955219 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.955536 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: 
I0126 15:37:52.955778 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.956072 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:52 crc kubenswrapper[4713]: I0126 15:37:52.956436 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042602 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042671 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042714 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042756 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042756 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.042895 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.043080 4713 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.043099 4713 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.043111 4713 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.058666 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.059795 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.060600 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.061103 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.061412 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.061623 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.061854 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.062099 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" 
pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.062356 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.062598 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.062878 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.063131 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.143934 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock\") pod \"36284b41-4184-472e-967c-f0345cf1ae81\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144508 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access\") pod \"36284b41-4184-472e-967c-f0345cf1ae81\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144065 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock" (OuterVolumeSpecName: "var-lock") pod "36284b41-4184-472e-967c-f0345cf1ae81" (UID: "36284b41-4184-472e-967c-f0345cf1ae81"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144569 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir\") pod \"36284b41-4184-472e-967c-f0345cf1ae81\" (UID: \"36284b41-4184-472e-967c-f0345cf1ae81\") " Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144673 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36284b41-4184-472e-967c-f0345cf1ae81" (UID: "36284b41-4184-472e-967c-f0345cf1ae81"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144763 4713 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.144781 4713 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36284b41-4184-472e-967c-f0345cf1ae81-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.152794 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36284b41-4184-472e-967c-f0345cf1ae81" (UID: "36284b41-4184-472e-967c-f0345cf1ae81"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.253884 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36284b41-4184-472e-967c-f0345cf1ae81-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.514581 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.514642 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.773625 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.777901 4713 scope.go:117] "RemoveContainer" containerID="bcda87c77ac409f0d9972812ace9fbcbe4ecee15304d8ceb4d6e038063cfded5" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.777957 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.779931 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36284b41-4184-472e-967c-f0345cf1ae81","Type":"ContainerDied","Data":"74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073"} Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.779966 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74440167513b833ce8dab7286003b42108e297fb930d88bd3036299d70d6c073" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.780004 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.784350 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerStarted","Data":"e240cd400170094d9e7e88ae9a1e433814c4a83953eb3585f8e0a5c9824d8f52"} Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.785172 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.785678 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.786164 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.786684 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.787007 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.787166 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 
15:37:53.787333 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.787565 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.787809 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.788085 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.788530 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.794192 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerStarted","Data":"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246"} Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.795421 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.798401 4713 scope.go:117] "RemoveContainer" containerID="696a9055d47f78f48b80799a59228d25c82b49e8a6e0e84ef9f1bb6340a76493" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.798518 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.798831 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.799031 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.799266 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.802567 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.802784 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.802987 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.803166 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.803343 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.803609 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.803964 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.804211 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.804943 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.805442 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.805722 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.806051 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.806284 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.806557 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.806827 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.807069 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.807335 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.813241 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.816415 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.817525 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerStarted","Data":"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"} Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.817766 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.818238 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.819185 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.819423 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.819703 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 
crc kubenswrapper[4713]: I0126 15:37:53.820087 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.820254 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.820716 4713 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.821141 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.821344 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.824680 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerStarted","Data":"b7b1e329b5b9eb758136695b206bb3c1ad48755c0ace7c910a47ef93d45eaddc"} Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826055 4713 scope.go:117] "RemoveContainer" containerID="4d0ab7a882f0d0c107fce8a94d1ae6b4393c4cb78effd89b33032d39e1068888" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826056 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826294 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826500 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826651 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826813 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.826957 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.827106 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.827330 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.831597 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.832077 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.845535 4713 scope.go:117] "RemoveContainer" containerID="d6c300af98ce13bedb634804e706b934a89ecf191e3c14c83ce6db1edb58ea24" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.861467 4713 scope.go:117] "RemoveContainer" containerID="9fcabd7faf799b58f869165883e295e0cdd1acb68e68314ceac0541382766d15" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.876354 4713 scope.go:117] "RemoveContainer" containerID="a3ed0950f032f66b9bab2891a789ebfa74852c3cf8e069cc347e3ec08dd365d3" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.983545 
4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:37:53 crc kubenswrapper[4713]: I0126 15:37:53.983639 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:37:54 crc kubenswrapper[4713]: I0126 15:37:54.632881 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jpzjd" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="registry-server" probeResult="failure" output=< Jan 26 15:37:54 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 15:37:54 crc kubenswrapper[4713]: > Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.027052 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pvkg2" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="registry-server" probeResult="failure" output=< Jan 26 15:37:55 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 15:37:55 crc kubenswrapper[4713]: > Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.808809 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.809227 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.809709 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.810299 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.810751 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.811268 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: 
I0126 15:37:55.811615 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.811986 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.812327 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:55 crc kubenswrapper[4713]: I0126 15:37:55.814253 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.922678 4713 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.923127 4713 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.923797 4713 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.924259 4713 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.924638 4713 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:56 crc kubenswrapper[4713]: I0126 15:37:56.924681 4713 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 15:37:56 crc kubenswrapper[4713]: E0126 15:37:56.925021 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" 
interval="200ms" Jan 26 15:37:57 crc kubenswrapper[4713]: E0126 15:37:57.126590 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 26 15:37:57 crc kubenswrapper[4713]: E0126 15:37:57.528123 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.111665 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:37:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:37:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:37:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:37:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1203425009},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:baf4eb931aab99ddd36e09d79f76ea1128c2ef536e95b78edb9af73175db2be3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:dfb030ab67faacd3572a0cae805bd05f041ba6a589cf6fb289cb2295f364c580\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1183907051},{\\\"names\\\":[],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.i
o/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4
643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.112615 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.112910 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.113166 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.113420 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.113447 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:37:58 crc kubenswrapper[4713]: E0126 15:37:58.329572 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 26 15:37:59 crc kubenswrapper[4713]: E0126 15:37:59.931172 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.265425 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.265882 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.313193 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.314320 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.315556 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" 
pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.315815 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.316143 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.316574 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.316933 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.317212 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.317804 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.318772 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.319034 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.722812 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.722919 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.775065 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.775897 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.776434 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.776810 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.778904 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.779692 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.780340 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.780712 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.781070 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.781752 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.782018 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.797125 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.797488 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.870782 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.871638 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.880926 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.881479 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.881903 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.882064 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.885004 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.886534 4713 
status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.887132 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.887581 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.888278 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.889059 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.889634 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.927180 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.928248 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.928851 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.929436 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.929725 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.929794 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.930027 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.930329 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.930657 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.931011 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.931331 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.931684 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.932161 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 
15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.932455 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.932765 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.932793 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.933132 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.933484 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.933879 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.934488 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.934728 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.935205 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.935658 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" 
pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.936054 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.936490 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.936850 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.937320 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.937668 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.938023 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.938412 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.938725 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.939001 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:00 crc kubenswrapper[4713]: I0126 15:38:00.939284 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: E0126 15:38:01.872079 4713 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" volumeName="registry-storage" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.930483 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.931342 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.931791 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.932095 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.932476 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.933262 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.933601 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.933989 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.934411 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.934754 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:01 crc kubenswrapper[4713]: I0126 15:38:01.935080 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:02 crc kubenswrapper[4713]: E0126 15:38:02.859351 4713 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-tn7l2.188e51cdf7e2e2c3\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-tn7l2.188e51cdf7e2e2c3 openshift-machine-config-operator 26710 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-tn7l2,UID:f608dd80-4cbf-4490-b062-2bef233d25d1,APIVersion:v1,ResourceVersion:26683,FieldPath:spec.containers{machine-config-daemon},},Reason:Created,Message:Created container machine-config-daemon,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:34:08 +0000 UTC,LastTimestamp:2026-01-26 15:37:50.089701569 +0000 UTC m=+245.226718804,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:38:02 crc kubenswrapper[4713]: I0126 15:38:02.923817 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-x29hb" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="registry-server" probeResult="failure" output=< Jan 26 15:38:02 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 15:38:02 crc kubenswrapper[4713]: > Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.055164 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.055279 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.127092 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.127904 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.128652 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.129693 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.130468 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.130961 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.131816 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: E0126 15:38:03.131827 4713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.132488 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" 
Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.133015 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.133823 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.134276 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.194254 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.194410 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.261996 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.262912 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.263556 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.264195 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.264638 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.265121 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.265753 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.266255 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.266774 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.267350 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.267931 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.583389 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.584308 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.584899 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.585407 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.586231 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.586852 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.587717 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.588704 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.589163 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.589715 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.590476 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.638751 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.639421 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.639843 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.640196 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.640564 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.640967 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.641545 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.642182 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.642691 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.643118 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.643562 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.676333 4713 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.677086 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.678327 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.679175 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.679860 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.680471 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.680840 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.681239 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.681720 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.682009 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.682345 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.962980 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.963827 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.964405 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.964717 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.964739 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.965712 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.966098 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.966581 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.967070 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.967542 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.967940 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.968339 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.968928 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.969315 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.969740 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.970152 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.970555 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.970952 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" 
pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.971437 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.971824 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.972322 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:03 crc kubenswrapper[4713]: I0126 15:38:03.972782 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.045884 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.046938 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.047566 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.048160 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.049029 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.049550 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.049976 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.050587 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.051047 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.051554 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.052098 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.093095 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.094108 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.094680 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.095312 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.095758 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.096184 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.096758 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.097286 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.097742 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.098242 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.098664 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.349040 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.349115 4713 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.802808 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.804539 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.805247 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.805949 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.806424 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.806850 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.807281 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.807825 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.808410 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.808903 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.809423 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.828737 4713 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.828798 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:04 crc kubenswrapper[4713]: E0126 15:38:04.829455 4713 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.830499 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:04 crc kubenswrapper[4713]: W0126 15:38:04.871277 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ed3e47ad186903711e25ad8d24324562dbc3d4e41453ea57f6f6f2f9811b1923 WatchSource:0}: Error finding container ed3e47ad186903711e25ad8d24324562dbc3d4e41453ea57f6f6f2f9811b1923: Status 404 returned error can't find the container with id ed3e47ad186903711e25ad8d24324562dbc3d4e41453ea57f6f6f2f9811b1923 Jan 26 15:38:04 crc kubenswrapper[4713]: I0126 15:38:04.906649 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed3e47ad186903711e25ad8d24324562dbc3d4e41453ea57f6f6f2f9811b1923"} Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.803831 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.813349 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.813744 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.813847 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.814418 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.814973 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.815423 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.815839 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.816336 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.817080 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.817829 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.818698 4713 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.819510 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.930563 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.930666 4713 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5" exitCode=1 Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.930747 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5"} Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.933249 4713 scope.go:117] "RemoveContainer" containerID="9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.934381 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.935787 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.936238 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.936414 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.936632 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.936848 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.937047 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.937229 4713 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.937503 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.937714 4713 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.937911 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:05 crc kubenswrapper[4713]: I0126 15:38:05.938156 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:06 crc kubenswrapper[4713]: E0126 15:38:06.237767 4713 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 15:38:06 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d" 
Netns:"/var/run/netns/543cfa8e-6d6b-4609-be7d-e59bf42bc2fd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:38:06 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:38:06 crc kubenswrapper[4713]: > Jan 26 15:38:06 crc kubenswrapper[4713]: E0126 15:38:06.237921 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 15:38:06 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d" Netns:"/var/run/netns/543cfa8e-6d6b-4609-be7d-e59bf42bc2fd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:38:06 crc kubenswrapper[4713]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:38:06 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:06 crc kubenswrapper[4713]: E0126 15:38:06.237965 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 26 15:38:06 crc kubenswrapper[4713]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d" Netns:"/var/run/netns/543cfa8e-6d6b-4609-be7d-e59bf42bc2fd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Jan 26 15:38:06 crc kubenswrapper[4713]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 15:38:06 crc kubenswrapper[4713]: > pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:06 crc kubenswrapper[4713]: E0126 15:38:06.238040 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-ccc74cc7-5w8hc_openshift-authentication_83f3415e-59a8-40a1-b6ac-77bdc12a3368_0(9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d): error adding pod openshift-authentication_oauth-openshift-ccc74cc7-5w8hc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d\\\" Netns:\\\"/var/run/netns/543cfa8e-6d6b-4609-be7d-e59bf42bc2fd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-ccc74cc7-5w8hc;K8S_POD_INFRA_CONTAINER_ID=9ad46d48db73f45be6eaabb9b61751f308c0e0a9ed14afebb314a866cad49b3d;K8S_POD_UID=83f3415e-59a8-40a1-b6ac-77bdc12a3368\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc] networking: Multus: [openshift-authentication/oauth-openshift-ccc74cc7-5w8hc/83f3415e-59a8-40a1-b6ac-77bdc12a3368]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-ccc74cc7-5w8hc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-ccc74cc7-5w8hc?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:38:06 crc kubenswrapper[4713]: I0126 15:38:06.820181 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:06 crc kubenswrapper[4713]: I0126 15:38:06.940813 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26bb3a8295b0f50f4c3377daa8febf829ac6f37da6cdf10307908ce314b6d77d"} Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.963962 4713 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="26bb3a8295b0f50f4c3377daa8febf829ac6f37da6cdf10307908ce314b6d77d" exitCode=0 Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.964822 4713 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.964841 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.965347 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"26bb3a8295b0f50f4c3377daa8febf829ac6f37da6cdf10307908ce314b6d77d"} Jan 26 15:38:07 crc kubenswrapper[4713]: E0126 15:38:07.965629 4713 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.965855 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.966647 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.967326 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.967708 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.968258 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.969191 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.970214 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.970596 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.971167 4713 status_manager.go:851] "Failed to get status for pod" 
podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.971592 4713 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.972043 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.972300 4713 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.973125 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.973253 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c889d3c3ebbb38a70e9ca163f5bc15594006ce07b6a354f5fb92e65c9347699f"} Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.991507 4713 status_manager.go:851] "Failed to get status for pod" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" pod="openshift-marketplace/redhat-operators-jpzjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jpzjd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.992249 4713 status_manager.go:851] "Failed to get status for pod" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" pod="openshift-marketplace/certified-operators-4j4cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4j4cb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.992826 4713 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.993258 4713 status_manager.go:851] "Failed to get status for pod" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" pod="openshift-marketplace/redhat-operators-pvkg2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvkg2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.993619 4713 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.993915 4713 status_manager.go:851] "Failed to get status for pod" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" pod="openshift-marketplace/redhat-marketplace-5mg77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5mg77\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.994223 4713 status_manager.go:851] "Failed to get status for pod" podUID="d7259d39-ff96-407d-b595-119128ba5677" pod="openshift-marketplace/redhat-marketplace-dpdqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpdqx\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.994547 4713 status_manager.go:851] "Failed to get status for pod" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" pod="openshift-marketplace/community-operators-x29hb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-x29hb\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.994852 4713 status_manager.go:851] "Failed to get status for pod" podUID="02195b48-5845-4f33-861e-e6527590c4d9" pod="openshift-marketplace/certified-operators-cbs5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cbs5g\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.995136 4713 status_manager.go:851] "Failed to get status for pod" podUID="36284b41-4184-472e-967c-f0345cf1ae81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.995499 4713 status_manager.go:851] "Failed to get status for pod" podUID="34325b63-2012-4f82-8860-c88e2847683b" pod="openshift-marketplace/community-operators-jd4ff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jd4ff\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:07 crc kubenswrapper[4713]: I0126 15:38:07.995775 4713 status_manager.go:851] "Failed to get status for pod" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-tn7l2\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.199270 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1203425009},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:baf4eb931aab99ddd36e09d79f76ea1128c2ef536e95b78edb9af73175db2be3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:dfb030ab67faacd3572a0cae805bd05f041ba6a589cf6fb289cb2295f364c580\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1183907051},{\\\"names\\\":[],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\"
:907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator
@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.199950 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.200469 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.200893 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.201750 4713 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 26 15:38:08 crc kubenswrapper[4713]: E0126 15:38:08.201777 4713 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:38:08 crc kubenswrapper[4713]: I0126 15:38:08.986659 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"450770bdef7d30915b634801f3e557f23af8ceb4c4a6950bfc559fa8c20ddb2f"} Jan 26 15:38:08 crc kubenswrapper[4713]: I0126 15:38:08.987598 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d8c71b41a80bbc1f56cb1c6fa75886906eb838e548949a8dfbb90780e409347"} Jan 26 15:38:08 crc kubenswrapper[4713]: I0126 15:38:08.987622 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f4a635748f7c48a5d9d3b77c832b002895e17f460305f605ad12dc510d61039"} Jan 26 15:38:09 crc kubenswrapper[4713]: I0126 15:38:09.996870 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"22565b8f1d60bacde44bb32b5dc465829560f6e428f5e772b18c322a4f425837"} Jan 26 15:38:09 crc kubenswrapper[4713]: I0126 15:38:09.996966 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b41b939d5be7c5cc9df4de9ec9ec9198d2ace7c7dfde73540014687eb77a9fe1"} Jan 26 15:38:09 crc kubenswrapper[4713]: I0126 15:38:09.997129 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:09 crc kubenswrapper[4713]: I0126 15:38:09.997370 4713 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:09 crc kubenswrapper[4713]: I0126 15:38:09.997413 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:11 crc kubenswrapper[4713]: I0126 15:38:11.897006 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:14 crc kubenswrapper[4713]: I0126 15:38:14.831329 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:14 crc kubenswrapper[4713]: I0126 15:38:14.832127 4713 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:14 crc kubenswrapper[4713]: I0126 15:38:14.840290 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:15 crc kubenswrapper[4713]: I0126 15:38:15.015322 4713 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:15 crc kubenswrapper[4713]: I0126 15:38:15.189531 4713 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb257fc4-c8a8-49c5-90e8-077e80a01b4d" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.043258 4713 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.043312 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.047554 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.047812 4713 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb257fc4-c8a8-49c5-90e8-077e80a01b4d" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.819890 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.820033 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 15:38:16 crc kubenswrapper[4713]: I0126 15:38:16.820087 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 15:38:17 crc kubenswrapper[4713]: I0126 15:38:17.050377 4713 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:17 crc kubenswrapper[4713]: I0126 15:38:17.050420 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="93543424-4011-4a77-a471-5f0ef9989535" Jan 26 15:38:17 crc kubenswrapper[4713]: I0126 15:38:17.054259 4713 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb257fc4-c8a8-49c5-90e8-077e80a01b4d" Jan 26 15:38:18 crc kubenswrapper[4713]: I0126 15:38:18.802962 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:18 crc kubenswrapper[4713]: I0126 15:38:18.804114 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:20 crc kubenswrapper[4713]: I0126 15:38:20.071984 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" event={"ID":"83f3415e-59a8-40a1-b6ac-77bdc12a3368","Type":"ContainerStarted","Data":"264e8f4f02933c4017577b29698ef31da04df78351980e9a1b1fbaaf9a7ae618"} Jan 26 15:38:20 crc kubenswrapper[4713]: I0126 15:38:20.072639 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" event={"ID":"83f3415e-59a8-40a1-b6ac-77bdc12a3368","Type":"ContainerStarted","Data":"a49f164acc8aee222cdf47ce4cfa08eac7a24ef663b77d33af0fc66d289687bb"} Jan 26 15:38:20 crc kubenswrapper[4713]: I0126 15:38:20.073122 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:20 crc kubenswrapper[4713]: I0126 15:38:20.294983 4713 patch_prober.go:28] interesting pod/oauth-openshift-ccc74cc7-5w8hc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:36026->10.217.0.56:6443: read: connection reset by peer" start-of-body= Jan 26 15:38:20 crc kubenswrapper[4713]: I0126 15:38:20.295045 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:36026->10.217.0.56:6443: read: connection reset by peer" Jan 26 15:38:21 crc kubenswrapper[4713]: I0126 15:38:21.080650 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ccc74cc7-5w8hc_83f3415e-59a8-40a1-b6ac-77bdc12a3368/oauth-openshift/0.log" Jan 26 15:38:21 crc kubenswrapper[4713]: I0126 15:38:21.081157 4713 generic.go:334] "Generic (PLEG): container finished" podID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" containerID="264e8f4f02933c4017577b29698ef31da04df78351980e9a1b1fbaaf9a7ae618" exitCode=255 Jan 26 15:38:21 crc kubenswrapper[4713]: I0126 15:38:21.081213 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" event={"ID":"83f3415e-59a8-40a1-b6ac-77bdc12a3368","Type":"ContainerDied","Data":"264e8f4f02933c4017577b29698ef31da04df78351980e9a1b1fbaaf9a7ae618"} Jan 26 15:38:21 crc kubenswrapper[4713]: I0126 15:38:21.081962 4713 scope.go:117] "RemoveContainer" containerID="264e8f4f02933c4017577b29698ef31da04df78351980e9a1b1fbaaf9a7ae618" Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.097177 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ccc74cc7-5w8hc_83f3415e-59a8-40a1-b6ac-77bdc12a3368/oauth-openshift/1.log" Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.097757 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ccc74cc7-5w8hc_83f3415e-59a8-40a1-b6ac-77bdc12a3368/oauth-openshift/0.log" Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.097821 4713 generic.go:334] "Generic (PLEG): container finished" 
podID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" containerID="00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27" exitCode=255 Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.097856 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" event={"ID":"83f3415e-59a8-40a1-b6ac-77bdc12a3368","Type":"ContainerDied","Data":"00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27"} Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.097903 4713 scope.go:117] "RemoveContainer" containerID="264e8f4f02933c4017577b29698ef31da04df78351980e9a1b1fbaaf9a7ae618" Jan 26 15:38:22 crc kubenswrapper[4713]: I0126 15:38:22.098330 4713 scope.go:117] "RemoveContainer" containerID="00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27" Jan 26 15:38:22 crc kubenswrapper[4713]: E0126 15:38:22.098736 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\"" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:38:23 crc kubenswrapper[4713]: I0126 15:38:23.109070 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ccc74cc7-5w8hc_83f3415e-59a8-40a1-b6ac-77bdc12a3368/oauth-openshift/1.log" Jan 26 15:38:23 crc kubenswrapper[4713]: I0126 15:38:23.112086 4713 scope.go:117] "RemoveContainer" containerID="00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27" Jan 26 15:38:23 crc kubenswrapper[4713]: E0126 15:38:23.112510 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\"" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:38:25 crc kubenswrapper[4713]: I0126 15:38:25.093715 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 15:38:25 crc kubenswrapper[4713]: I0126 15:38:25.466920 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 15:38:25 crc kubenswrapper[4713]: I0126 15:38:25.696613 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 15:38:25 crc kubenswrapper[4713]: I0126 15:38:25.699594 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.436323 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.624625 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.820215 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.820643 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.829984 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 15:38:26 crc kubenswrapper[4713]: I0126 15:38:26.865319 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.053072 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.071705 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.163639 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.510175 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.605133 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.820291 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.933664 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.977391 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 15:38:27 crc kubenswrapper[4713]: I0126 15:38:27.989579 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.340735 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.340798 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.341569 4713 scope.go:117] "RemoveContainer" containerID="00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27" Jan 26 15:38:28 crc kubenswrapper[4713]: E0126 15:38:28.342016 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-ccc74cc7-5w8hc_openshift-authentication(83f3415e-59a8-40a1-b6ac-77bdc12a3368)\"" 
pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podUID="83f3415e-59a8-40a1-b6ac-77bdc12a3368" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.367307 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.395279 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.463167 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.600251 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.616049 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.670515 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.756102 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.803239 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 15:38:28 crc kubenswrapper[4713]: I0126 15:38:28.910753 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.059631 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.247910 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.307742 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.371847 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.390681 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.392435 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.418285 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.548613 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.646435 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" 
Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.655755 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.740582 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.779128 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.782299 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.935786 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 15:38:29 crc kubenswrapper[4713]: I0126 15:38:29.961669 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.032692 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.138222 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.183572 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.223113 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.410876 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.431628 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.473441 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.615259 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.661765 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.665619 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.692575 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.795898 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.851488 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 
26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.926237 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.931555 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.956011 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 15:38:30 crc kubenswrapper[4713]: I0126 15:38:30.998621 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.014453 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.058973 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.175805 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.195332 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.198481 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.304794 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.712783 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.712916 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.712805 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.713049 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.713084 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.713216 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.713274 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.713646 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 15:38:31 crc kubenswrapper[4713]: I0126 15:38:31.799018 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 15:38:31 crc 
kubenswrapper[4713]: I0126 15:38:31.907341 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.015762 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.042174 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.053426 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.242959 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.251844 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.334711 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.393548 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.399113 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.458151 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.506108 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.560543 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 15:38:32 crc kubenswrapper[4713]: I0126 15:38:32.623904 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.030672 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.037731 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.072788 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.154255 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.192705 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.471045 4713 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.522651 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.609043 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.672303 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.706145 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.919810 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 15:38:33 crc kubenswrapper[4713]: I0126 15:38:33.992471 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.118942 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.123745 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.186499 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.300061 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.351546 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.357083 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.395838 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.401470 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.427118 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.622106 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.625027 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.636457 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.688931 4713 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.697559 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.724889 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.774711 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.784863 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.801699 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.884055 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.905080 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.943635 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 15:38:34 crc kubenswrapper[4713]: I0126 15:38:34.943916 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.165497 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.174619 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.182634 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.186591 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.257847 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.501935 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.656396 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.690860 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.757976 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.897127 4713 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.905047 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 15:38:35 crc kubenswrapper[4713]: I0126 15:38:35.922023 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.033937 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.047214 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.064346 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.074296 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.133029 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.135994 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.263833 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.266001 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.279061 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.371072 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.397756 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.410879 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.652581 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.685900 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.692420 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.706202 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 15:38:36 crc 
kubenswrapper[4713]: I0126 15:38:36.716816 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.792633 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.814600 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.820445 4713 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.820503 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.820561 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.821138 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c889d3c3ebbb38a70e9ca163f5bc15594006ce07b6a354f5fb92e65c9347699f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.821266 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c889d3c3ebbb38a70e9ca163f5bc15594006ce07b6a354f5fb92e65c9347699f" gracePeriod=30 Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.901502 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.943072 4713 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.198600 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.208973 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.233424 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.285696 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.328493 4713 
Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.901502 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 15:38:36 crc kubenswrapper[4713]: I0126 15:38:36.943072 4713 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.198600 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.208973 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.233424 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.285696 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.328493 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.340322 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.365697 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.407201 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.475436 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.629673 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.642525 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.651697 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.711040 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.787709 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.880455 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.974653 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 15:38:37 crc kubenswrapper[4713]: I0126 15:38:37.994111 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.136945 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.244646 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.271985 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.411396 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.419412 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.486534 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.488616 4713 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.578733 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.607873 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.636386 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.640448 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.715736 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.732235 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.757452 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.829822 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.901561 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 15:38:38 crc kubenswrapper[4713]: I0126 15:38:38.993287 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 15:38:39 crc kubenswrapper[4713]: I0126 15:38:39.086168 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 15:38:39 crc kubenswrapper[4713]: I0126 15:38:39.411199 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 15:38:39 crc kubenswrapper[4713]: I0126 15:38:39.484591 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 15:38:39 crc kubenswrapper[4713]: I0126 15:38:39.945164 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 15:38:40 crc kubenswrapper[4713]: I0126 15:38:40.166497 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 15:38:40 crc kubenswrapper[4713]: I0126 15:38:40.255496 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 15:38:40 crc kubenswrapper[4713]: I0126 15:38:40.304332 4713 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 15:38:40 crc kubenswrapper[4713]: I0126 15:38:40.531618 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 15:38:40 crc kubenswrapper[4713]: I0126 15:38:40.806515 4713 scope.go:117] "RemoveContainer" containerID="00397d0af97ec880f10d523177940b44491b5d554e0d9456cbc769e88c4bee27"
Jan 26 15:38:41 crc kubenswrapper[4713]: I0126 15:38:41.126296 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 15:38:41 crc kubenswrapper[4713]: I0126 15:38:41.218654 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ccc74cc7-5w8hc_83f3415e-59a8-40a1-b6ac-77bdc12a3368/oauth-openshift/1.log"
Jan 26 15:38:41 crc kubenswrapper[4713]: I0126 15:38:41.218745 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" event={"ID":"83f3415e-59a8-40a1-b6ac-77bdc12a3368","Type":"ContainerStarted","Data":"243d09f59881f92f910c87577762c9734779102f28d90472821e1a8116698165"}
Jan 26 15:38:41 crc kubenswrapper[4713]: I0126 15:38:41.219178 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:38:41 crc kubenswrapper[4713]: I0126 15:38:41.394845 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"
Jan 26 15:38:45 crc kubenswrapper[4713]: I0126 15:38:45.646856 4713 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
from object-"openshift-console-operator"/"trusted-ca" Jan 26 15:38:57 crc kubenswrapper[4713]: I0126 15:38:57.875675 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 15:38:57 crc kubenswrapper[4713]: I0126 15:38:57.972335 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 15:38:58 crc kubenswrapper[4713]: I0126 15:38:58.169613 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 15:38:58 crc kubenswrapper[4713]: I0126 15:38:58.342604 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 15:38:58 crc kubenswrapper[4713]: I0126 15:38:58.391230 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 15:38:58 crc kubenswrapper[4713]: I0126 15:38:58.401563 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 15:38:59 crc kubenswrapper[4713]: I0126 15:38:59.119645 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 15:38:59 crc kubenswrapper[4713]: I0126 15:38:59.119987 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 15:38:59 crc kubenswrapper[4713]: I0126 15:38:59.361861 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 15:38:59 crc kubenswrapper[4713]: I0126 15:38:59.856910 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:39:01 crc kubenswrapper[4713]: I0126 15:39:01.465934 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 15:39:01 crc kubenswrapper[4713]: I0126 15:39:01.874965 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 15:39:02 crc kubenswrapper[4713]: I0126 15:39:02.672031 4713 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 15:39:02 crc kubenswrapper[4713]: I0126 15:39:02.718843 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 15:39:02 crc kubenswrapper[4713]: I0126 15:39:02.751074 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 15:39:02 crc kubenswrapper[4713]: I0126 15:39:02.899930 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 15:39:03 crc kubenswrapper[4713]: I0126 15:39:03.576697 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 15:39:04 crc kubenswrapper[4713]: I0126 15:39:04.619418 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 15:39:05 crc kubenswrapper[4713]: I0126 15:39:05.315719 4713 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 15:39:05 crc kubenswrapper[4713]: I0126 15:39:05.970675 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 15:39:06 crc kubenswrapper[4713]: I0126 15:39:06.723349 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 15:39:06 crc kubenswrapper[4713]: I0126 15:39:06.770445 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.289491 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.350211 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.352729 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.419432 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.422222 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.422310 4713 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c889d3c3ebbb38a70e9ca163f5bc15594006ce07b6a354f5fb92e65c9347699f" exitCode=137 Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.422383 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c889d3c3ebbb38a70e9ca163f5bc15594006ce07b6a354f5fb92e65c9347699f"} Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.422463 4713 scope.go:117] "RemoveContainer" containerID="9b05b32056dd807ebca7717762c7d06ddd9787e2402b42d1ab27c196390272a5" Jan 26 15:39:07 crc kubenswrapper[4713]: I0126 15:39:07.527420 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.323842 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.331936 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.431031 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.432254 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1eb30ac222cdecb18b4a313c866fdb3558e9ed67e55d924527d1d9afc9a1b55c"} Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.529014 4713 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.529899 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-ccc74cc7-5w8hc" podStartSLOduration=118.529881709 podStartE2EDuration="1m58.529881709s" podCreationTimestamp="2026-01-26 15:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:38:20.096562984 +0000 UTC m=+275.233580219" watchObservedRunningTime="2026-01-26 15:39:08.529881709 +0000 UTC m=+323.666898954" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.530903 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jd4ff" podStartSLOduration=79.075961664 podStartE2EDuration="2m48.53089556s" podCreationTimestamp="2026-01-26 15:36:20 +0000 UTC" firstStartedPulling="2026-01-26 15:36:23.33364216 +0000 UTC m=+158.470659395" lastFinishedPulling="2026-01-26 15:37:52.788576056 +0000 UTC m=+247.925593291" observedRunningTime="2026-01-26 15:38:15.109806415 +0000 UTC m=+270.246823650" watchObservedRunningTime="2026-01-26 15:39:08.53089556 +0000 UTC m=+323.667912805" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.531006 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvkg2" podStartSLOduration=82.300637486 podStartE2EDuration="2m45.531002524s" podCreationTimestamp="2026-01-26 15:36:23 +0000 UTC" firstStartedPulling="2026-01-26 15:36:26.595224563 +0000 UTC m=+161.732241798" lastFinishedPulling="2026-01-26 15:37:49.825589601 +0000 UTC m=+244.962606836" observedRunningTime="2026-01-26 15:38:15.206775593 +0000 UTC m=+270.343792828" watchObservedRunningTime="2026-01-26 15:39:08.531002524 +0000 UTC m=+323.668019769" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.531322 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dpdqx" podStartSLOduration=79.309929262 podStartE2EDuration="2m47.531319393s" podCreationTimestamp="2026-01-26 15:36:21 +0000 UTC" firstStartedPulling="2026-01-26 15:36:24.485476325 +0000 UTC m=+159.622493560" lastFinishedPulling="2026-01-26 15:37:52.706866456 +0000 UTC m=+247.843883691" observedRunningTime="2026-01-26 15:38:15.05134124 +0000 UTC m=+270.188358475" watchObservedRunningTime="2026-01-26 15:39:08.531319393 +0000 UTC m=+323.668336638" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.531542 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cbs5g" podStartSLOduration=79.71682477 podStartE2EDuration="2m48.53153852s" podCreationTimestamp="2026-01-26 15:36:20 +0000 UTC" firstStartedPulling="2026-01-26 15:36:23.353836274 +0000 UTC m=+158.490853509" lastFinishedPulling="2026-01-26 15:37:52.168550024 +0000 UTC m=+247.305567259" observedRunningTime="2026-01-26 15:38:15.08943069 +0000 UTC m=+270.226447925" watchObservedRunningTime="2026-01-26 15:39:08.53153852 +0000 UTC m=+323.668555765" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.531628 4713 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-jpzjd" podStartSLOduration=81.431181794 podStartE2EDuration="2m45.531625863s" podCreationTimestamp="2026-01-26 15:36:23 +0000 UTC" firstStartedPulling="2026-01-26 15:36:25.49868968 +0000 UTC m=+160.635706915" lastFinishedPulling="2026-01-26 15:37:49.599133739 +0000 UTC m=+244.736150984" observedRunningTime="2026-01-26 15:38:15.169711685 +0000 UTC m=+270.306728920" watchObservedRunningTime="2026-01-26 15:39:08.531625863 +0000 UTC m=+323.668643108" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.532297 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5mg77" podStartSLOduration=79.3310526 podStartE2EDuration="2m46.532293963s" podCreationTimestamp="2026-01-26 15:36:22 +0000 UTC" firstStartedPulling="2026-01-26 15:36:25.510566118 +0000 UTC m=+160.647583353" lastFinishedPulling="2026-01-26 15:37:52.711807491 +0000 UTC m=+247.848824716" observedRunningTime="2026-01-26 15:38:15.23990063 +0000 UTC m=+270.376917865" watchObservedRunningTime="2026-01-26 15:39:08.532293963 +0000 UTC m=+323.669311208" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.533493 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x29hb" podStartSLOduration=78.947954603 podStartE2EDuration="2m48.53348812s" podCreationTimestamp="2026-01-26 15:36:20 +0000 UTC" firstStartedPulling="2026-01-26 15:36:23.421665471 +0000 UTC m=+158.558682706" lastFinishedPulling="2026-01-26 15:37:53.007198978 +0000 UTC m=+248.144216223" observedRunningTime="2026-01-26 15:38:15.0698995 +0000 UTC m=+270.206916735" watchObservedRunningTime="2026-01-26 15:39:08.53348812 +0000 UTC m=+323.670505365" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.533671 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4j4cb" podStartSLOduration=83.144433386 podStartE2EDuration="2m49.533667945s" podCreationTimestamp="2026-01-26 15:36:19 +0000 UTC" firstStartedPulling="2026-01-26 15:36:23.359891626 +0000 UTC m=+158.496908861" lastFinishedPulling="2026-01-26 15:37:49.749126175 +0000 UTC m=+244.886143420" observedRunningTime="2026-01-26 15:38:15.1871498 +0000 UTC m=+270.324167035" watchObservedRunningTime="2026-01-26 15:39:08.533667945 +0000 UTC m=+323.670685190" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.534528 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.534564 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.534592 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-5w8hc"] Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.561005 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=53.560980234 podStartE2EDuration="53.560980234s" podCreationTimestamp="2026-01-26 15:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:08.555412463 +0000 UTC m=+323.692429738" watchObservedRunningTime="2026-01-26 15:39:08.560980234 +0000 UTC m=+323.697997479" 
Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.573682 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=7.573664484 podStartE2EDuration="7.573664484s" podCreationTimestamp="2026-01-26 15:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:08.569297759 +0000 UTC m=+323.706315004" watchObservedRunningTime="2026-01-26 15:39:08.573664484 +0000 UTC m=+323.710681729" Jan 26 15:39:08 crc kubenswrapper[4713]: I0126 15:39:08.649142 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 15:39:09 crc kubenswrapper[4713]: I0126 15:39:09.223611 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 15:39:10 crc kubenswrapper[4713]: I0126 15:39:10.227736 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 15:39:10 crc kubenswrapper[4713]: I0126 15:39:10.491825 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 15:39:10 crc kubenswrapper[4713]: I0126 15:39:10.548277 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 15:39:10 crc kubenswrapper[4713]: I0126 15:39:10.611206 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 15:39:11 crc kubenswrapper[4713]: I0126 15:39:11.104930 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 15:39:11 crc kubenswrapper[4713]: I0126 15:39:11.378857 4713 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 15:39:11 crc kubenswrapper[4713]: I0126 15:39:11.897268 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:39:12 crc kubenswrapper[4713]: I0126 15:39:12.456315 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 15:39:12 crc kubenswrapper[4713]: I0126 15:39:12.899281 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 15:39:13 crc kubenswrapper[4713]: I0126 15:39:13.523139 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 15:39:14 crc kubenswrapper[4713]: I0126 15:39:14.837921 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:39:15 crc kubenswrapper[4713]: I0126 15:39:15.565208 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 15:39:16 crc kubenswrapper[4713]: I0126 15:39:16.820007 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:39:16 crc kubenswrapper[4713]: I0126 15:39:16.826170 4713 
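Note: in these pod_startup_latency_tracker entries, podStartSLOduration appears to be the end-to-end duration minus the image-pull window; for community-operators-jd4ff, 2m48.53s E2E minus the pull interval 15:36:23.33 to 15:37:52.79 (about 89.45s) gives exactly the logged 79.076s SLO value. The arithmetic below reproduces that relationship as inferred from the values above; the formula itself is an assumption, not quoted from the tracker source.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values from the community-operators-jd4ff entry above.
        created := parse("2026-01-26T15:36:20Z")
        firstPull := parse("2026-01-26T15:36:23.33364216Z")
        lastPull := parse("2026-01-26T15:37:52.788576056Z")
        watchObserved := parse("2026-01-26T15:39:08.53089556Z")

        e2e := watchObserved.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        // Prints E2E=2m48.53089556s SLO=1m19.075961664s, matching
        // podStartE2EDuration and podStartSLOduration in the log.
        fmt.Printf("E2E=%s SLO=%s\n", e2e, slo)
    }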
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:39:17 crc kubenswrapper[4713]: I0126 15:39:17.495319 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:39:18 crc kubenswrapper[4713]: I0126 15:39:18.933480 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 15:39:19 crc kubenswrapper[4713]: I0126 15:39:19.617321 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 15:39:20 crc kubenswrapper[4713]: I0126 15:39:20.024473 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 15:39:23 crc kubenswrapper[4713]: I0126 15:39:23.019125 4713 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:39:23 crc kubenswrapper[4713]: I0126 15:39:23.019628 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c9b10bd378e1fdb773488fb955ab36931092f7ca76ee5ade74977351e582531e" gracePeriod=5 Jan 26 15:39:27 crc kubenswrapper[4713]: I0126 15:39:27.621462 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"] Jan 26 15:39:27 crc kubenswrapper[4713]: I0126 15:39:27.622032 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" containerID="cri-o://3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61" gracePeriod=30 Jan 26 15:39:27 crc kubenswrapper[4713]: I0126 15:39:27.640995 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"] Jan 26 15:39:27 crc kubenswrapper[4713]: I0126 15:39:27.641237 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" podUID="971b502e-8b71-404b-a7ca-58aa1894c648" containerName="route-controller-manager" containerID="cri-o://284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e" gracePeriod=30 Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.474633 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.480319 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.565695 4713 generic.go:334] "Generic (PLEG): container finished" podID="971b502e-8b71-404b-a7ca-58aa1894c648" containerID="284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e" exitCode=0 Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.565799 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" event={"ID":"971b502e-8b71-404b-a7ca-58aa1894c648","Type":"ContainerDied","Data":"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e"} Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.565861 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" event={"ID":"971b502e-8b71-404b-a7ca-58aa1894c648","Type":"ContainerDied","Data":"24bd32ce13ccc550ba78318d3b5968a4497fa748a510b36c32f83bc562d0b456"} Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.565883 4713 scope.go:117] "RemoveContainer" containerID="284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.566058 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.573708 4713 generic.go:334] "Generic (PLEG): container finished" podID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerID="3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61" exitCode=0 Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.573872 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.573948 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" event={"ID":"99621db9-a20f-42b1-a788-a65ad55b6a52","Type":"ContainerDied","Data":"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61"} Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.574023 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rcgql" event={"ID":"99621db9-a20f-42b1-a788-a65ad55b6a52","Type":"ContainerDied","Data":"18c0ff29c7068757529db809aa88dc778266dd5402a1dad0a9675e4d9c8060d7"} Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.576652 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.576737 4713 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c9b10bd378e1fdb773488fb955ab36931092f7ca76ee5ade74977351e582531e" exitCode=137 Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.589153 4713 scope.go:117] "RemoveContainer" containerID="284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.589677 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config\") pod \"971b502e-8b71-404b-a7ca-58aa1894c648\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " Jan 26 15:39:28 crc kubenswrapper[4713]: E0126 15:39:28.589806 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e\": container with ID starting with 284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e not found: ID does not exist" containerID="284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.589869 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e"} err="failed to get container status \"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e\": rpc error: code = NotFound desc = could not find container \"284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e\": container with ID starting with 284c9b4b58910547f1170f8604462c3482da82a14c65d9f79c0e5afd9471f86e not found: ID does not exist" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.589924 4713 scope.go:117] "RemoveContainer" containerID="3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590043 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") pod \"99621db9-a20f-42b1-a788-a65ad55b6a52\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590142 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") pod \"99621db9-a20f-42b1-a788-a65ad55b6a52\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590232 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca\") pod \"971b502e-8b71-404b-a7ca-58aa1894c648\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590333 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") pod \"971b502e-8b71-404b-a7ca-58aa1894c648\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590553 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") pod \"99621db9-a20f-42b1-a788-a65ad55b6a52\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590680 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") pod \"971b502e-8b71-404b-a7ca-58aa1894c648\" (UID: \"971b502e-8b71-404b-a7ca-58aa1894c648\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590821 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") pod \"99621db9-a20f-42b1-a788-a65ad55b6a52\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590928 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca" (OuterVolumeSpecName: "client-ca") pod "971b502e-8b71-404b-a7ca-58aa1894c648" (UID: "971b502e-8b71-404b-a7ca-58aa1894c648"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.590941 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") pod \"99621db9-a20f-42b1-a788-a65ad55b6a52\" (UID: \"99621db9-a20f-42b1-a788-a65ad55b6a52\") " Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.591064 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config" (OuterVolumeSpecName: "config") pod "971b502e-8b71-404b-a7ca-58aa1894c648" (UID: "971b502e-8b71-404b-a7ca-58aa1894c648"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.591610 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.591635 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/971b502e-8b71-404b-a7ca-58aa1894c648-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.591885 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca" (OuterVolumeSpecName: "client-ca") pod "99621db9-a20f-42b1-a788-a65ad55b6a52" (UID: "99621db9-a20f-42b1-a788-a65ad55b6a52"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.592078 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "99621db9-a20f-42b1-a788-a65ad55b6a52" (UID: "99621db9-a20f-42b1-a788-a65ad55b6a52"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.594578 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config" (OuterVolumeSpecName: "config") pod "99621db9-a20f-42b1-a788-a65ad55b6a52" (UID: "99621db9-a20f-42b1-a788-a65ad55b6a52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.596306 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99621db9-a20f-42b1-a788-a65ad55b6a52" (UID: "99621db9-a20f-42b1-a788-a65ad55b6a52"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.596777 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2" (OuterVolumeSpecName: "kube-api-access-227r2") pod "99621db9-a20f-42b1-a788-a65ad55b6a52" (UID: "99621db9-a20f-42b1-a788-a65ad55b6a52"). InnerVolumeSpecName "kube-api-access-227r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.596817 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "971b502e-8b71-404b-a7ca-58aa1894c648" (UID: "971b502e-8b71-404b-a7ca-58aa1894c648"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.596975 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln" (OuterVolumeSpecName: "kube-api-access-4f6ln") pod "971b502e-8b71-404b-a7ca-58aa1894c648" (UID: "971b502e-8b71-404b-a7ca-58aa1894c648"). 
InnerVolumeSpecName "kube-api-access-4f6ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.606412 4713 scope.go:117] "RemoveContainer" containerID="3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61" Jan 26 15:39:28 crc kubenswrapper[4713]: E0126 15:39:28.606909 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61\": container with ID starting with 3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61 not found: ID does not exist" containerID="3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.606967 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61"} err="failed to get container status \"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61\": rpc error: code = NotFound desc = could not find container \"3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61\": container with ID starting with 3267b474a914958a9a7705e6364ce71951bba78189ec25b6ddf9a8cbdbf39a61 not found: ID does not exist" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693138 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693174 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693184 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693194 4713 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693203 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693211 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693219 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.785994 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.786570 4713 util.go:48] "No ready sandbox for pod 
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693138 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-227r2\" (UniqueName: \"kubernetes.io/projected/99621db9-a20f-42b1-a788-a65ad55b6a52-kube-api-access-227r2\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693174 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f6ln\" (UniqueName: \"kubernetes.io/projected/971b502e-8b71-404b-a7ca-58aa1894c648-kube-api-access-4f6ln\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693184 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693194 4713 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693203 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99621db9-a20f-42b1-a788-a65ad55b6a52-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693211 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99621db9-a20f-42b1-a788-a65ad55b6a52-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.693219 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971b502e-8b71-404b-a7ca-58aa1894c648-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.785994 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.786570 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.894388 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"]
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.895988 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896060 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896111 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896149 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896171 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896322 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896379 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896402 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.896658 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.903314 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.909296 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8r7k5"]
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.919504 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"]
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.923919 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rcgql"]
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.997341 4713 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.997401 4713 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.997415 4713 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.997424 4713 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:28 crc kubenswrapper[4713]: I0126 15:39:28.997436 4713 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.165259 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:29 crc kubenswrapper[4713]: E0126 15:39:29.165419 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.165526 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.166273 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.166426 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="36284b41-4184-472e-967c-f0345cf1ae81" containerName="installer" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.166559 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="971b502e-8b71-404b-a7ca-58aa1894c648" containerName="route-controller-manager" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.166675 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" containerName="controller-manager" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.168173 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67dfb78b56-lmdnd"] Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.170336 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.172885 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.175683 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.176644 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn"] Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.179122 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.179412 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.179137 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.179993 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.180457 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.180818 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.181002 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.181980 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.182005 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.182749 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.183890 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.185976 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.188016 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67dfb78b56-lmdnd"] Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303613 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-proxy-ca-bundles\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303660 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd361ad5-372a-405c-a3a1-6104428a0137-serving-cert\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303689 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-config\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303736 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4dz\" (UniqueName: \"kubernetes.io/projected/fd361ad5-372a-405c-a3a1-6104428a0137-kube-api-access-qs4dz\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303769 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndmd\" (UniqueName: \"kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303799 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-client-ca\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303831 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303861 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.303895 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405033 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405086 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405135 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-proxy-ca-bundles\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405163 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd361ad5-372a-405c-a3a1-6104428a0137-serving-cert\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405191 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-config\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405230 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs4dz\" (UniqueName: \"kubernetes.io/projected/fd361ad5-372a-405c-a3a1-6104428a0137-kube-api-access-qs4dz\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405260 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bndmd\" (UniqueName: \"kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405291 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-client-ca\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.405320 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca\") pod 
\"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.406414 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-proxy-ca-bundles\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.406428 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.406443 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-client-ca\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.406996 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd361ad5-372a-405c-a3a1-6104428a0137-config\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.412208 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.412276 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd361ad5-372a-405c-a3a1-6104428a0137-serving-cert\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.420817 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bndmd\" (UniqueName: \"kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.421122 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config\") pod \"route-controller-manager-588756b8c7-q86nn\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc 
kubenswrapper[4713]: I0126 15:39:29.423468 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs4dz\" (UniqueName: \"kubernetes.io/projected/fd361ad5-372a-405c-a3a1-6104428a0137-kube-api-access-qs4dz\") pod \"controller-manager-67dfb78b56-lmdnd\" (UID: \"fd361ad5-372a-405c-a3a1-6104428a0137\") " pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.496430 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.507719 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.605341 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.605729 4713 scope.go:117] "RemoveContainer" containerID="c9b10bd378e1fdb773488fb955ab36931092f7ca76ee5ade74977351e582531e" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.605843 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.720478 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67dfb78b56-lmdnd"] Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.769852 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn"] Jan 26 15:39:29 crc kubenswrapper[4713]: W0126 15:39:29.776054 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f52d77_e5b4_4c4e_941e_bbc0fda897ba.slice/crio-1886cd56c13a0509b7eacd705cf9cac9cfaa1fdcfb33819a02888ec6afff8cec WatchSource:0}: Error finding container 1886cd56c13a0509b7eacd705cf9cac9cfaa1fdcfb33819a02888ec6afff8cec: Status 404 returned error can't find the container with id 1886cd56c13a0509b7eacd705cf9cac9cfaa1fdcfb33819a02888ec6afff8cec Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.813632 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971b502e-8b71-404b-a7ca-58aa1894c648" path="/var/lib/kubelet/pods/971b502e-8b71-404b-a7ca-58aa1894c648/volumes" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.816468 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99621db9-a20f-42b1-a788-a65ad55b6a52" path="/var/lib/kubelet/pods/99621db9-a20f-42b1-a788-a65ad55b6a52/volumes" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.817122 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.817495 4713 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.827313 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:39:29 crc 
kubenswrapper[4713]: I0126 15:39:29.827448 4713 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1fa06026-2737-4df1-8b68-f0d6f6b1965d" Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.831559 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:39:29 crc kubenswrapper[4713]: I0126 15:39:29.831605 4713 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1fa06026-2737-4df1-8b68-f0d6f6b1965d" Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.612867 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" event={"ID":"97f52d77-e5b4-4c4e-941e-bbc0fda897ba","Type":"ContainerStarted","Data":"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869"} Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.613191 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" event={"ID":"97f52d77-e5b4-4c4e-941e-bbc0fda897ba","Type":"ContainerStarted","Data":"1886cd56c13a0509b7eacd705cf9cac9cfaa1fdcfb33819a02888ec6afff8cec"} Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.613212 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.614477 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" event={"ID":"fd361ad5-372a-405c-a3a1-6104428a0137","Type":"ContainerStarted","Data":"bf688bffd050e793a68f196dc9e6abcf1099c89a3b802da3e52679be0377275f"} Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.614528 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" event={"ID":"fd361ad5-372a-405c-a3a1-6104428a0137","Type":"ContainerStarted","Data":"f9ddf609187ef43771a90ed793f9993c4bdbd1ddc470e79f81e3554df317d9bd"} Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.614687 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.620029 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.623983 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" Jan 26 15:39:30 crc kubenswrapper[4713]: I0126 15:39:30.635798 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" podStartSLOduration=3.635777927 podStartE2EDuration="3.635777927s" podCreationTimestamp="2026-01-26 15:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:30.632145793 +0000 UTC m=+345.769163038" watchObservedRunningTime="2026-01-26 15:39:30.635777927 +0000 UTC m=+345.772795172" Jan 26 15:39:51 crc 
kubenswrapper[4713]: I0126 15:39:51.772555 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67dfb78b56-lmdnd" podStartSLOduration=24.772534912 podStartE2EDuration="24.772534912s" podCreationTimestamp="2026-01-26 15:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:30.676675075 +0000 UTC m=+345.813692320" watchObservedRunningTime="2026-01-26 15:39:51.772534912 +0000 UTC m=+366.909552147" Jan 26 15:39:51 crc kubenswrapper[4713]: I0126 15:39:51.773325 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"] Jan 26 15:39:51 crc kubenswrapper[4713]: I0126 15:39:51.773611 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cbs5g" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="registry-server" containerID="cri-o://1fee69ce490a780e1d2a5bc6bbce49acb94dfbd30fc1da8fcc5e2c564a49694c" gracePeriod=2 Jan 26 15:39:51 crc kubenswrapper[4713]: I0126 15:39:51.970582 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x29hb"] Jan 26 15:39:51 crc kubenswrapper[4713]: I0126 15:39:51.971065 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x29hb" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="registry-server" containerID="cri-o://e240cd400170094d9e7e88ae9a1e433814c4a83953eb3585f8e0a5c9824d8f52" gracePeriod=2 Jan 26 15:39:52 crc kubenswrapper[4713]: I0126 15:39:52.742672 4713 generic.go:334] "Generic (PLEG): container finished" podID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerID="e240cd400170094d9e7e88ae9a1e433814c4a83953eb3585f8e0a5c9824d8f52" exitCode=0 Jan 26 15:39:52 crc kubenswrapper[4713]: I0126 15:39:52.742747 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerDied","Data":"e240cd400170094d9e7e88ae9a1e433814c4a83953eb3585f8e0a5c9824d8f52"} Jan 26 15:39:52 crc kubenswrapper[4713]: I0126 15:39:52.745140 4713 generic.go:334] "Generic (PLEG): container finished" podID="02195b48-5845-4f33-861e-e6527590c4d9" containerID="1fee69ce490a780e1d2a5bc6bbce49acb94dfbd30fc1da8fcc5e2c564a49694c" exitCode=0 Jan 26 15:39:52 crc kubenswrapper[4713]: I0126 15:39:52.745197 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerDied","Data":"1fee69ce490a780e1d2a5bc6bbce49acb94dfbd30fc1da8fcc5e2c564a49694c"} Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.352862 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.374191 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcb9\" (UniqueName: \"kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9\") pod \"02195b48-5845-4f33-861e-e6527590c4d9\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.374282 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities\") pod \"02195b48-5845-4f33-861e-e6527590c4d9\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.374305 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content\") pod \"02195b48-5845-4f33-861e-e6527590c4d9\" (UID: \"02195b48-5845-4f33-861e-e6527590c4d9\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.384087 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities" (OuterVolumeSpecName: "utilities") pod "02195b48-5845-4f33-861e-e6527590c4d9" (UID: "02195b48-5845-4f33-861e-e6527590c4d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.385789 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9" (OuterVolumeSpecName: "kube-api-access-vvcb9") pod "02195b48-5845-4f33-861e-e6527590c4d9" (UID: "02195b48-5845-4f33-861e-e6527590c4d9"). InnerVolumeSpecName "kube-api-access-vvcb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.419572 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02195b48-5845-4f33-861e-e6527590c4d9" (UID: "02195b48-5845-4f33-861e-e6527590c4d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.427183 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.475551 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.475594 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02195b48-5845-4f33-861e-e6527590c4d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.475609 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvcb9\" (UniqueName: \"kubernetes.io/projected/02195b48-5845-4f33-861e-e6527590c4d9-kube-api-access-vvcb9\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.576646 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj9w8\" (UniqueName: \"kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8\") pod \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.576902 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities\") pod \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.576933 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content\") pod \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\" (UID: \"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6\") " Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.577788 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities" (OuterVolumeSpecName: "utilities") pod "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" (UID: "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.585527 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8" (OuterVolumeSpecName: "kube-api-access-sj9w8") pod "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" (UID: "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6"). InnerVolumeSpecName "kube-api-access-sj9w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.633563 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" (UID: "6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.677721 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj9w8\" (UniqueName: \"kubernetes.io/projected/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-kube-api-access-sj9w8\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.677974 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.678038 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.752160 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x29hb" event={"ID":"6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6","Type":"ContainerDied","Data":"8d155a5fa3292c1c581e289b2d7535bc937387fc79cc316fb714fbdadcc6fed8"} Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.752464 4713 scope.go:117] "RemoveContainer" containerID="e240cd400170094d9e7e88ae9a1e433814c4a83953eb3585f8e0a5c9824d8f52" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.752177 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x29hb" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.754796 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbs5g" event={"ID":"02195b48-5845-4f33-861e-e6527590c4d9","Type":"ContainerDied","Data":"6d5b8e73875635358189fb9d767e328f0a37d5d1c4fb04230322343a8afdc72f"} Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.754977 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbs5g" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.768653 4713 scope.go:117] "RemoveContainer" containerID="ef691122833354f136720a3db0cf647f33f77d4d4594259ef777e2207506ae54" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.792565 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x29hb"] Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.802356 4713 scope.go:117] "RemoveContainer" containerID="743156198b7d32a0ad2259a82edf19f4fd896b3c33e78af4ddd75d6b8abbed6f" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.815455 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x29hb"] Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.815496 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"] Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.820196 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cbs5g"] Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.832591 4713 scope.go:117] "RemoveContainer" containerID="1fee69ce490a780e1d2a5bc6bbce49acb94dfbd30fc1da8fcc5e2c564a49694c" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.847006 4713 scope.go:117] "RemoveContainer" containerID="60d69218d47c032f428e572b5b05b3bb4ed68b4dffd739a2c1ffbacb5a2a60b5" Jan 26 15:39:53 crc kubenswrapper[4713]: I0126 15:39:53.860902 4713 scope.go:117] "RemoveContainer" containerID="4d7b3d10647b8910b7342e698d4444751bacffb0c6892ad3aa5d91cb9b3a4b63" Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.380771 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.381073 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5mg77" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="registry-server" containerID="cri-o://b7b1e329b5b9eb758136695b206bb3c1ad48755c0ace7c910a47ef93d45eaddc" gracePeriod=2 Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.575702 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.576110 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvkg2" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="registry-server" containerID="cri-o://b28fe47b065f8d3f2a5d0a63990a294c25ad99f0142a582b3428c315dd956664" gracePeriod=2 Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.764259 4713 generic.go:334] "Generic (PLEG): container finished" podID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerID="b28fe47b065f8d3f2a5d0a63990a294c25ad99f0142a582b3428c315dd956664" exitCode=0 Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.764340 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerDied","Data":"b28fe47b065f8d3f2a5d0a63990a294c25ad99f0142a582b3428c315dd956664"} Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.768661 4713 generic.go:334] "Generic (PLEG): container finished" podID="81c9faca-c7e6-4016-b528-5a1da4deacd7" 
containerID="b7b1e329b5b9eb758136695b206bb3c1ad48755c0ace7c910a47ef93d45eaddc" exitCode=0 Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.768728 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerDied","Data":"b7b1e329b5b9eb758136695b206bb3c1ad48755c0ace7c910a47ef93d45eaddc"} Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.849923 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.996006 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfg5k\" (UniqueName: \"kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k\") pod \"81c9faca-c7e6-4016-b528-5a1da4deacd7\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.996101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities\") pod \"81c9faca-c7e6-4016-b528-5a1da4deacd7\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.996153 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content\") pod \"81c9faca-c7e6-4016-b528-5a1da4deacd7\" (UID: \"81c9faca-c7e6-4016-b528-5a1da4deacd7\") " Jan 26 15:39:54 crc kubenswrapper[4713]: I0126 15:39:54.997050 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities" (OuterVolumeSpecName: "utilities") pod "81c9faca-c7e6-4016-b528-5a1da4deacd7" (UID: "81c9faca-c7e6-4016-b528-5a1da4deacd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.000970 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k" (OuterVolumeSpecName: "kube-api-access-lfg5k") pod "81c9faca-c7e6-4016-b528-5a1da4deacd7" (UID: "81c9faca-c7e6-4016-b528-5a1da4deacd7"). InnerVolumeSpecName "kube-api-access-lfg5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.025704 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81c9faca-c7e6-4016-b528-5a1da4deacd7" (UID: "81c9faca-c7e6-4016-b528-5a1da4deacd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.029409 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.098174 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfg5k\" (UniqueName: \"kubernetes.io/projected/81c9faca-c7e6-4016-b528-5a1da4deacd7-kube-api-access-lfg5k\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.098224 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.098237 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c9faca-c7e6-4016-b528-5a1da4deacd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.199652 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities\") pod \"b26921c6-11ce-4667-ad0c-bd7ff1366938\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.199815 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv9pk\" (UniqueName: \"kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk\") pod \"b26921c6-11ce-4667-ad0c-bd7ff1366938\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.199863 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content\") pod \"b26921c6-11ce-4667-ad0c-bd7ff1366938\" (UID: \"b26921c6-11ce-4667-ad0c-bd7ff1366938\") " Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.201194 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities" (OuterVolumeSpecName: "utilities") pod "b26921c6-11ce-4667-ad0c-bd7ff1366938" (UID: "b26921c6-11ce-4667-ad0c-bd7ff1366938"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.202945 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk" (OuterVolumeSpecName: "kube-api-access-kv9pk") pod "b26921c6-11ce-4667-ad0c-bd7ff1366938" (UID: "b26921c6-11ce-4667-ad0c-bd7ff1366938"). InnerVolumeSpecName "kube-api-access-kv9pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.301935 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.301989 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv9pk\" (UniqueName: \"kubernetes.io/projected/b26921c6-11ce-4667-ad0c-bd7ff1366938-kube-api-access-kv9pk\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.327720 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b26921c6-11ce-4667-ad0c-bd7ff1366938" (UID: "b26921c6-11ce-4667-ad0c-bd7ff1366938"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.403419 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26921c6-11ce-4667-ad0c-bd7ff1366938-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.782556 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvkg2" event={"ID":"b26921c6-11ce-4667-ad0c-bd7ff1366938","Type":"ContainerDied","Data":"c886224804d0d120265badaabd055e9396818b93d2ec5c4a11c662e548aa4e8b"} Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.782628 4713 scope.go:117] "RemoveContainer" containerID="b28fe47b065f8d3f2a5d0a63990a294c25ad99f0142a582b3428c315dd956664" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.782692 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvkg2" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.787213 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mg77" event={"ID":"81c9faca-c7e6-4016-b528-5a1da4deacd7","Type":"ContainerDied","Data":"9a4664e68e03eac6315daefe3a4686a8f6f75c6d4cabada310d005409c66220c"} Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.787312 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mg77" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.809972 4713 scope.go:117] "RemoveContainer" containerID="b794d2c20ecd400ab32cfab1f17efb0941678173f03299dfe078b96633783cd7" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.813792 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02195b48-5845-4f33-861e-e6527590c4d9" path="/var/lib/kubelet/pods/02195b48-5845-4f33-861e-e6527590c4d9/volumes" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.815327 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" path="/var/lib/kubelet/pods/6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6/volumes" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.840683 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.850505 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvkg2"] Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.854217 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.858502 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mg77"] Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.865047 4713 scope.go:117] "RemoveContainer" containerID="a0055e88063a1324d2a8502b0fba4082387b4b0284898fa8468a86f6fd961c8d" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.878050 4713 scope.go:117] "RemoveContainer" containerID="b7b1e329b5b9eb758136695b206bb3c1ad48755c0ace7c910a47ef93d45eaddc" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.897413 4713 scope.go:117] "RemoveContainer" containerID="85027c2bd1df3c84c1fb207919564a9bb15fb82c858dd52edf17921038cc6991" Jan 26 15:39:55 crc kubenswrapper[4713]: I0126 15:39:55.912043 4713 scope.go:117] "RemoveContainer" containerID="b5c6356a0583f0e503110859d3f03cfc893257533e1dbdebfc12891d26e276e8" Jan 26 15:39:57 crc kubenswrapper[4713]: I0126 15:39:57.822773 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" path="/var/lib/kubelet/pods/81c9faca-c7e6-4016-b528-5a1da4deacd7/volumes" Jan 26 15:39:57 crc kubenswrapper[4713]: I0126 15:39:57.824332 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" path="/var/lib/kubelet/pods/b26921c6-11ce-4667-ad0c-bd7ff1366938/volumes" Jan 26 15:40:01 crc kubenswrapper[4713]: I0126 15:40:01.979808 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn"] Jan 26 15:40:01 crc kubenswrapper[4713]: I0126 15:40:01.980288 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" podUID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" containerName="route-controller-manager" containerID="cri-o://4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869" gracePeriod=30 Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.440679 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.598595 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca\") pod \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.598649 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert\") pod \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.598682 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bndmd\" (UniqueName: \"kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd\") pod \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.598705 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config\") pod \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\" (UID: \"97f52d77-e5b4-4c4e-941e-bbc0fda897ba\") " Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.599470 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config" (OuterVolumeSpecName: "config") pod "97f52d77-e5b4-4c4e-941e-bbc0fda897ba" (UID: "97f52d77-e5b4-4c4e-941e-bbc0fda897ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.599839 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "97f52d77-e5b4-4c4e-941e-bbc0fda897ba" (UID: "97f52d77-e5b4-4c4e-941e-bbc0fda897ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.619675 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97f52d77-e5b4-4c4e-941e-bbc0fda897ba" (UID: "97f52d77-e5b4-4c4e-941e-bbc0fda897ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.619755 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd" (OuterVolumeSpecName: "kube-api-access-bndmd") pod "97f52d77-e5b4-4c4e-941e-bbc0fda897ba" (UID: "97f52d77-e5b4-4c4e-941e-bbc0fda897ba"). InnerVolumeSpecName "kube-api-access-bndmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.699678 4713 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.699730 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bndmd\" (UniqueName: \"kubernetes.io/projected/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-kube-api-access-bndmd\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.699774 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.699794 4713 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f52d77-e5b4-4c4e-941e-bbc0fda897ba-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.864050 4713 generic.go:334] "Generic (PLEG): container finished" podID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" containerID="4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869" exitCode=0 Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.864441 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" event={"ID":"97f52d77-e5b4-4c4e-941e-bbc0fda897ba","Type":"ContainerDied","Data":"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869"} Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.864524 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" event={"ID":"97f52d77-e5b4-4c4e-941e-bbc0fda897ba","Type":"ContainerDied","Data":"1886cd56c13a0509b7eacd705cf9cac9cfaa1fdcfb33819a02888ec6afff8cec"} Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.864571 4713 scope.go:117] "RemoveContainer" containerID="4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.864777 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.887131 4713 scope.go:117] "RemoveContainer" containerID="4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869" Jan 26 15:40:02 crc kubenswrapper[4713]: E0126 15:40:02.887569 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869\": container with ID starting with 4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869 not found: ID does not exist" containerID="4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.887610 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869"} err="failed to get container status \"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869\": rpc error: code = NotFound desc = could not find container \"4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869\": container with ID starting with 4c2e3d8cdee510be24561ca0d38e0a76700bb351c83456feb3d4a3e89de48869 not found: ID does not exist" Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.901093 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn"] Jan 26 15:40:02 crc kubenswrapper[4713]: I0126 15:40:02.908351 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-q86nn"] Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186107 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b"] Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186390 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" containerName="route-controller-manager" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186403 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" containerName="route-controller-manager" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186416 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186425 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186434 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186461 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186470 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186475 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" 
containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186485 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186490 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186499 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186504 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186514 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186519 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186549 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186555 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186566 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186571 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186579 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186585 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186594 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186619 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186629 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186634 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="extract-utilities" Jan 26 15:40:03 crc kubenswrapper[4713]: E0126 15:40:03.186644 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186651 4713 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="extract-content" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186777 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="02195b48-5845-4f33-861e-e6527590c4d9" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186790 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c9faca-c7e6-4016-b528-5a1da4deacd7" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186800 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" containerName="route-controller-manager" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186809 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdc8e5a-b873-4cd2-aa55-377f7d19f6c6" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.186816 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26921c6-11ce-4667-ad0c-bd7ff1366938" containerName="registry-server" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.187259 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.193439 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.193707 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.194511 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.194819 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.195091 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.195147 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b"] Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.195222 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.212355 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhs8\" (UniqueName: \"kubernetes.io/projected/eac0fd33-36a6-4330-925f-97c973421863-kube-api-access-9hhs8\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.212513 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac0fd33-36a6-4330-925f-97c973421863-serving-cert\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " 
pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.212594 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-config\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.212654 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-client-ca\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.301561 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.301619 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.314062 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-client-ca\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.314159 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhs8\" (UniqueName: \"kubernetes.io/projected/eac0fd33-36a6-4330-925f-97c973421863-kube-api-access-9hhs8\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.314242 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac0fd33-36a6-4330-925f-97c973421863-serving-cert\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.314312 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-config\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.315073 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-client-ca\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.316276 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0fd33-36a6-4330-925f-97c973421863-config\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.319925 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac0fd33-36a6-4330-925f-97c973421863-serving-cert\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.332870 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhs8\" (UniqueName: \"kubernetes.io/projected/eac0fd33-36a6-4330-925f-97c973421863-kube-api-access-9hhs8\") pod \"route-controller-manager-5b8b6b7498-t245b\" (UID: \"eac0fd33-36a6-4330-925f-97c973421863\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.510765 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.814598 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f52d77-e5b4-4c4e-941e-bbc0fda897ba" path="/var/lib/kubelet/pods/97f52d77-e5b4-4c4e-941e-bbc0fda897ba/volumes" Jan 26 15:40:03 crc kubenswrapper[4713]: I0126 15:40:03.942615 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b"] Jan 26 15:40:03 crc kubenswrapper[4713]: W0126 15:40:03.949134 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac0fd33_36a6_4330_925f_97c973421863.slice/crio-1223726af2a54bf1746bc044076f3e6f7ff3fd9fb3032043727d8a545b54b4e8 WatchSource:0}: Error finding container 1223726af2a54bf1746bc044076f3e6f7ff3fd9fb3032043727d8a545b54b4e8: Status 404 returned error can't find the container with id 1223726af2a54bf1746bc044076f3e6f7ff3fd9fb3032043727d8a545b54b4e8 Jan 26 15:40:04 crc kubenswrapper[4713]: I0126 15:40:04.878104 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" event={"ID":"eac0fd33-36a6-4330-925f-97c973421863","Type":"ContainerStarted","Data":"bbb298356a33d3b33ca1e09b3fc9a7d3eaf5bd262874ea86edc87453eaeec0d2"} Jan 26 15:40:04 crc kubenswrapper[4713]: I0126 15:40:04.878329 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" 
event={"ID":"eac0fd33-36a6-4330-925f-97c973421863","Type":"ContainerStarted","Data":"1223726af2a54bf1746bc044076f3e6f7ff3fd9fb3032043727d8a545b54b4e8"} Jan 26 15:40:04 crc kubenswrapper[4713]: I0126 15:40:04.878513 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:04 crc kubenswrapper[4713]: I0126 15:40:04.884109 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" Jan 26 15:40:04 crc kubenswrapper[4713]: I0126 15:40:04.896330 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-t245b" podStartSLOduration=3.896305987 podStartE2EDuration="3.896305987s" podCreationTimestamp="2026-01-26 15:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:04.894416038 +0000 UTC m=+380.031433313" watchObservedRunningTime="2026-01-26 15:40:04.896305987 +0000 UTC m=+380.033323222" Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.792640 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.793513 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4j4cb" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="registry-server" containerID="cri-o://c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740" gracePeriod=30 Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.807589 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd4ff"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.807870 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jd4ff" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="registry-server" containerID="cri-o://3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f" gracePeriod=30 Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.814887 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.817438 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" containerID="cri-o://1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e" gracePeriod=30 Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.821221 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.821532 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dpdqx" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="registry-server" containerID="cri-o://ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246" gracePeriod=30 Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.836280 4713 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-4q88z"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.837077 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.845967 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.846572 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jpzjd" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="registry-server" containerID="cri-o://5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7" gracePeriod=30 Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.852737 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4q88z"] Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.962057 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.962169 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:06 crc kubenswrapper[4713]: I0126 15:40:06.962207 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248r6\" (UniqueName: \"kubernetes.io/projected/3d260b45-a0d0-4b98-9f8f-96d788e6d145-kube-api-access-248r6\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.063017 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.063099 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.063138 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248r6\" (UniqueName: \"kubernetes.io/projected/3d260b45-a0d0-4b98-9f8f-96d788e6d145-kube-api-access-248r6\") pod \"marketplace-operator-79b997595-4q88z\" (UID: 
\"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.064414 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.075143 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d260b45-a0d0-4b98-9f8f-96d788e6d145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.083191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248r6\" (UniqueName: \"kubernetes.io/projected/3d260b45-a0d0-4b98-9f8f-96d788e6d145-kube-api-access-248r6\") pod \"marketplace-operator-79b997595-4q88z\" (UID: \"3d260b45-a0d0-4b98-9f8f-96d788e6d145\") " pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.163521 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.259892 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.270396 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content\") pod \"34325b63-2012-4f82-8860-c88e2847683b\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.270438 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities\") pod \"34325b63-2012-4f82-8860-c88e2847683b\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.270471 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-476s2\" (UniqueName: \"kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2\") pod \"34325b63-2012-4f82-8860-c88e2847683b\" (UID: \"34325b63-2012-4f82-8860-c88e2847683b\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.278781 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities" (OuterVolumeSpecName: "utilities") pod "34325b63-2012-4f82-8860-c88e2847683b" (UID: "34325b63-2012-4f82-8860-c88e2847683b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.280946 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2" (OuterVolumeSpecName: "kube-api-access-476s2") pod "34325b63-2012-4f82-8860-c88e2847683b" (UID: "34325b63-2012-4f82-8860-c88e2847683b"). InnerVolumeSpecName "kube-api-access-476s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.330865 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34325b63-2012-4f82-8860-c88e2847683b" (UID: "34325b63-2012-4f82-8860-c88e2847683b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.345226 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.362402 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.367946 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372108 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk7h9\" (UniqueName: \"kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9\") pod \"d7259d39-ff96-407d-b595-119128ba5677\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372170 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities\") pod \"7263c807-ae6d-4fd4-af54-8372275f5c9a\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372211 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content\") pod \"d7259d39-ff96-407d-b595-119128ba5677\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372241 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities\") pod \"d7259d39-ff96-407d-b595-119128ba5677\" (UID: \"d7259d39-ff96-407d-b595-119128ba5677\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372264 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content\") pod \"7263c807-ae6d-4fd4-af54-8372275f5c9a\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372307 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67fhm\" (UniqueName: 
\"kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm\") pod \"7263c807-ae6d-4fd4-af54-8372275f5c9a\" (UID: \"7263c807-ae6d-4fd4-af54-8372275f5c9a\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372591 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-476s2\" (UniqueName: \"kubernetes.io/projected/34325b63-2012-4f82-8860-c88e2847683b-kube-api-access-476s2\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372610 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.372622 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34325b63-2012-4f82-8860-c88e2847683b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.375007 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities" (OuterVolumeSpecName: "utilities") pod "7263c807-ae6d-4fd4-af54-8372275f5c9a" (UID: "7263c807-ae6d-4fd4-af54-8372275f5c9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.375135 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities" (OuterVolumeSpecName: "utilities") pod "d7259d39-ff96-407d-b595-119128ba5677" (UID: "d7259d39-ff96-407d-b595-119128ba5677"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.379303 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9" (OuterVolumeSpecName: "kube-api-access-xk7h9") pod "d7259d39-ff96-407d-b595-119128ba5677" (UID: "d7259d39-ff96-407d-b595-119128ba5677"). InnerVolumeSpecName "kube-api-access-xk7h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.380595 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm" (OuterVolumeSpecName: "kube-api-access-67fhm") pod "7263c807-ae6d-4fd4-af54-8372275f5c9a" (UID: "7263c807-ae6d-4fd4-af54-8372275f5c9a"). InnerVolumeSpecName "kube-api-access-67fhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.382683 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.405393 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7259d39-ff96-407d-b595-119128ba5677" (UID: "d7259d39-ff96-407d-b595-119128ba5677"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.450058 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7263c807-ae6d-4fd4-af54-8372275f5c9a" (UID: "7263c807-ae6d-4fd4-af54-8372275f5c9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.473874 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca\") pod \"ca13e433-706e-4733-97e9-5ef2af9d4d19\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.473920 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content\") pod \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.473967 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics\") pod \"ca13e433-706e-4733-97e9-5ef2af9d4d19\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.474028 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4hm\" (UniqueName: \"kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm\") pod \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.474054 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmn7\" (UniqueName: \"kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7\") pod \"ca13e433-706e-4733-97e9-5ef2af9d4d19\" (UID: \"ca13e433-706e-4733-97e9-5ef2af9d4d19\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.474114 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities\") pod \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\" (UID: \"2cfb6957-a47e-4a83-befa-dbfc6a986ee9\") " Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.474354 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk7h9\" (UniqueName: \"kubernetes.io/projected/d7259d39-ff96-407d-b595-119128ba5677-kube-api-access-xk7h9\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.475399 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.475421 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 
15:40:07.475434 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7259d39-ff96-407d-b595-119128ba5677-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.475445 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7263c807-ae6d-4fd4-af54-8372275f5c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.475457 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67fhm\" (UniqueName: \"kubernetes.io/projected/7263c807-ae6d-4fd4-af54-8372275f5c9a-kube-api-access-67fhm\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.474529 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ca13e433-706e-4733-97e9-5ef2af9d4d19" (UID: "ca13e433-706e-4733-97e9-5ef2af9d4d19"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.475532 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities" (OuterVolumeSpecName: "utilities") pod "2cfb6957-a47e-4a83-befa-dbfc6a986ee9" (UID: "2cfb6957-a47e-4a83-befa-dbfc6a986ee9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.477251 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm" (OuterVolumeSpecName: "kube-api-access-nb4hm") pod "2cfb6957-a47e-4a83-befa-dbfc6a986ee9" (UID: "2cfb6957-a47e-4a83-befa-dbfc6a986ee9"). InnerVolumeSpecName "kube-api-access-nb4hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.477623 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ca13e433-706e-4733-97e9-5ef2af9d4d19" (UID: "ca13e433-706e-4733-97e9-5ef2af9d4d19"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.477916 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7" (OuterVolumeSpecName: "kube-api-access-8nmn7") pod "ca13e433-706e-4733-97e9-5ef2af9d4d19" (UID: "ca13e433-706e-4733-97e9-5ef2af9d4d19"). InnerVolumeSpecName "kube-api-access-8nmn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.576684 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.576718 4713 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.576729 4713 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca13e433-706e-4733-97e9-5ef2af9d4d19-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.576744 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4hm\" (UniqueName: \"kubernetes.io/projected/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-kube-api-access-nb4hm\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.577417 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmn7\" (UniqueName: \"kubernetes.io/projected/ca13e433-706e-4733-97e9-5ef2af9d4d19-kube-api-access-8nmn7\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.595264 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cfb6957-a47e-4a83-befa-dbfc6a986ee9" (UID: "2cfb6957-a47e-4a83-befa-dbfc6a986ee9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.678443 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfb6957-a47e-4a83-befa-dbfc6a986ee9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.714595 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4q88z"] Jan 26 15:40:07 crc kubenswrapper[4713]: W0126 15:40:07.717702 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d260b45_a0d0_4b98_9f8f_96d788e6d145.slice/crio-1ffd41ece3ce3419aadb649d600b9bbc09503080240dac4d0ca180fb78b0ca62 WatchSource:0}: Error finding container 1ffd41ece3ce3419aadb649d600b9bbc09503080240dac4d0ca180fb78b0ca62: Status 404 returned error can't find the container with id 1ffd41ece3ce3419aadb649d600b9bbc09503080240dac4d0ca180fb78b0ca62 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.901091 4713 generic.go:334] "Generic (PLEG): container finished" podID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerID="c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.902342 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerDied","Data":"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.902566 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j4cb" event={"ID":"7263c807-ae6d-4fd4-af54-8372275f5c9a","Type":"ContainerDied","Data":"a0d63a49b3bc5aa4346af01a510d7f66456ce45ba4a5b2e6fcedeb521dca0076"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.902678 4713 scope.go:117] "RemoveContainer" containerID="c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.902995 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4j4cb" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.912419 4713 generic.go:334] "Generic (PLEG): container finished" podID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerID="5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.912828 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerDied","Data":"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.913054 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpzjd" event={"ID":"2cfb6957-a47e-4a83-befa-dbfc6a986ee9","Type":"ContainerDied","Data":"9652d0c8be9e4999859bb3174a1c5cf058fd05c081e70407f8e970fdda85bc1a"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.913219 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jpzjd" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.919776 4713 generic.go:334] "Generic (PLEG): container finished" podID="d7259d39-ff96-407d-b595-119128ba5677" containerID="ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.920689 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerDied","Data":"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.920736 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpdqx" event={"ID":"d7259d39-ff96-407d-b595-119128ba5677","Type":"ContainerDied","Data":"d94549e467a8bc8429f2eeaa0f5268cd2337d28da17c8e9dd26bc37ac61cbddb"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.920775 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpdqx" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.929688 4713 scope.go:117] "RemoveContainer" containerID="c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.931957 4713 generic.go:334] "Generic (PLEG): container finished" podID="34325b63-2012-4f82-8860-c88e2847683b" containerID="3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.932048 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerDied","Data":"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.932091 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd4ff" event={"ID":"34325b63-2012-4f82-8860-c88e2847683b","Type":"ContainerDied","Data":"c3b0baa95c1e5b1ef0eb501bbf93e82d99c063b566c13047843fcb9fdb55a79c"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.932115 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jd4ff" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.934252 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"] Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.936733 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" event={"ID":"3d260b45-a0d0-4b98-9f8f-96d788e6d145","Type":"ContainerStarted","Data":"605d1b21a1e4f15056f15f160d8465ec29c13ccc7131268a3f160970f9ba315e"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.936777 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" event={"ID":"3d260b45-a0d0-4b98-9f8f-96d788e6d145","Type":"ContainerStarted","Data":"1ffd41ece3ce3419aadb649d600b9bbc09503080240dac4d0ca180fb78b0ca62"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.936803 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.937854 4713 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4q88z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.937891 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" podUID="3d260b45-a0d0-4b98-9f8f-96d788e6d145" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.946468 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4j4cb"] Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.949846 4713 generic.go:334] "Generic (PLEG): container finished" podID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerID="1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.949898 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" event={"ID":"ca13e433-706e-4733-97e9-5ef2af9d4d19","Type":"ContainerDied","Data":"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.949928 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" event={"ID":"ca13e433-706e-4733-97e9-5ef2af9d4d19","Type":"ContainerDied","Data":"b029de6ca58b4709cddbea8a57cabd65ba00ec5bbbfe281b134befc4a8afe312"} Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.949930 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-574q9" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.951124 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.963174 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jpzjd"] Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.987846 4713 scope.go:117] "RemoveContainer" containerID="5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c" Jan 26 15:40:07 crc kubenswrapper[4713]: I0126 15:40:07.988909 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4q88z" podStartSLOduration=1.98889048 podStartE2EDuration="1.98889048s" podCreationTimestamp="2026-01-26 15:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:07.9872663 +0000 UTC m=+383.124283545" watchObservedRunningTime="2026-01-26 15:40:07.98889048 +0000 UTC m=+383.125907715" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.020996 4713 scope.go:117] "RemoveContainer" containerID="c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.021565 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740\": container with ID starting with c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740 not found: ID does not exist" containerID="c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.021644 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740"} err="failed to get container status \"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740\": rpc error: code = NotFound desc = could not find container \"c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740\": container with ID starting with c5ef793df860280a1a8f972035348631eb182bdc49fc482c640cbb1c365f1740 not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.021684 4713 scope.go:117] "RemoveContainer" containerID="c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.022345 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be\": container with ID starting with c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be not found: ID does not exist" containerID="c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.022392 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be"} err="failed to get container status \"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be\": rpc error: code = NotFound desc = could not find container \"c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be\": container with 
ID starting with c806a7bf87d3e56bbfca04de49b6567022dfdff5e3366ae22729c0ca56fcb4be not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.022406 4713 scope.go:117] "RemoveContainer" containerID="5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.022771 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c\": container with ID starting with 5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c not found: ID does not exist" containerID="5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.022791 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c"} err="failed to get container status \"5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c\": rpc error: code = NotFound desc = could not find container \"5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c\": container with ID starting with 5aae2e4cf432f96f0880e9fc16f2e81d58035888550fd797b80206cb7977de0c not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.022803 4713 scope.go:117] "RemoveContainer" containerID="5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.040560 4713 scope.go:117] "RemoveContainer" containerID="69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.042586 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.054475 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpdqx"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.059748 4713 scope.go:117] "RemoveContainer" containerID="0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.063298 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd4ff"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.070325 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jd4ff"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.077871 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.080023 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-574q9"] Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.086138 4713 scope.go:117] "RemoveContainer" containerID="5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.086689 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7\": container with ID starting with 5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7 not found: ID does not exist" 
containerID="5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.086755 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7"} err="failed to get container status \"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7\": rpc error: code = NotFound desc = could not find container \"5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7\": container with ID starting with 5df0c11aceab7acc84323883cb1bb9fafcbc10fa0fcfb674c64ca06e287cf4c7 not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.086784 4713 scope.go:117] "RemoveContainer" containerID="69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.087268 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c\": container with ID starting with 69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c not found: ID does not exist" containerID="69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.087298 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c"} err="failed to get container status \"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c\": rpc error: code = NotFound desc = could not find container \"69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c\": container with ID starting with 69adaf695760261628c5e001fe78cdf411fe83103629119862c956ed98c9c24c not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.087338 4713 scope.go:117] "RemoveContainer" containerID="0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef" Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.087692 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef\": container with ID starting with 0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef not found: ID does not exist" containerID="0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.087732 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef"} err="failed to get container status \"0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef\": rpc error: code = NotFound desc = could not find container \"0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef\": container with ID starting with 0596c28793d0daae177b0e11f211271145095d73bce4d17009325d65d117f9ef not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.087747 4713 scope.go:117] "RemoveContainer" containerID="ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.100141 4713 scope.go:117] "RemoveContainer" containerID="0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f" Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 
15:40:08.113720 4713 scope.go:117] "RemoveContainer" containerID="447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.129471 4713 scope.go:117] "RemoveContainer" containerID="ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.129829 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246\": container with ID starting with ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246 not found: ID does not exist" containerID="ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.129878 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246"} err="failed to get container status \"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246\": rpc error: code = NotFound desc = could not find container \"ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246\": container with ID starting with ff34d93c0ec94755250d1375aa3c4ceb29b1cc04b990337b96e3d5eac8944246 not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.129907 4713 scope.go:117] "RemoveContainer" containerID="0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.130251 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f\": container with ID starting with 0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f not found: ID does not exist" containerID="0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.130271 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f"} err="failed to get container status \"0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f\": rpc error: code = NotFound desc = could not find container \"0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f\": container with ID starting with 0f0dd135dff68e37a6f62007c814e73bbc8c39e252b2a539af6188333ac4383f not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.130285 4713 scope.go:117] "RemoveContainer" containerID="447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.131059 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe\": container with ID starting with 447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe not found: ID does not exist" containerID="447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.131082 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe"} err="failed to get container status \"447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe\": rpc error: code = NotFound desc = could not find container \"447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe\": container with ID starting with 447bec5508f4e9b7b971d146f499c712197cbfac6066626e4a765ebf43fef0fe not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.131137 4713 scope.go:117] "RemoveContainer" containerID="3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.146394 4713 scope.go:117] "RemoveContainer" containerID="d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.173954 4713 scope.go:117] "RemoveContainer" containerID="84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.193070 4713 scope.go:117] "RemoveContainer" containerID="3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.193533 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f\": container with ID starting with 3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f not found: ID does not exist" containerID="3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.193574 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f"} err="failed to get container status \"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f\": rpc error: code = NotFound desc = could not find container \"3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f\": container with ID starting with 3b688c9709113bdf15283eaf85b8703d2f42d48ff4a4b590d150d9fa3913008f not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.193601 4713 scope.go:117] "RemoveContainer" containerID="d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.193926 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2\": container with ID starting with d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2 not found: ID does not exist" containerID="d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.193958 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2"} err="failed to get container status \"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2\": rpc error: code = NotFound desc = could not find container \"d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2\": container with ID starting with d96d8964499f5202a86b5ba55a7c4a40af4b3fe89e59900834ed5673fd12b6a2 not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.193980 4713 scope.go:117] "RemoveContainer" containerID="84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.194379 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485\": container with ID starting with 84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485 not found: ID does not exist" containerID="84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.194409 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485"} err="failed to get container status \"84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485\": rpc error: code = NotFound desc = could not find container \"84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485\": container with ID starting with 84b407651c8ce228dbb70b2db4d513529d9ce17be4c86dff1b0ca4032949e485 not found: ID does not exist"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.194427 4713 scope.go:117] "RemoveContainer" containerID="1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.211669 4713 scope.go:117] "RemoveContainer" containerID="1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"
Jan 26 15:40:08 crc kubenswrapper[4713]: E0126 15:40:08.212108 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e\": container with ID starting with 1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e not found: ID does not exist" containerID="1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"
Jan 26 15:40:08 crc kubenswrapper[4713]: I0126 15:40:08.212141 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e"} err="failed to get container status \"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e\": rpc error: code = NotFound desc = could not find container \"1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e\": container with ID starting with 1d4816464b0fa1f72dcdae22767efdaded9b700e5839f86c9e860fe6513e353e not found: ID does not exist"
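
Note: the RemoveContainer / "ID does not exist" pairs above are the kubelet garbage-collecting containers that CRI-O has already deleted. The removal path re-queries the runtime for container status, and a gRPC NotFound answer means the goal state (container gone) is already met, so the "DeleteContainer returned error" lines are harmless. A minimal sketch of that tolerance, assuming a gRPC-backed CRI client; the helper name and the remove callback are hypothetical, not the kubelet's actual code:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent deletes a container but treats a gRPC NotFound from
    // the runtime as success: the container is already gone, which is the
    // outcome the caller wanted. (Hypothetical helper for illustration.)
    func removeIfPresent(remove func(id string) error, id string) error {
    	err := remove(id)
    	if err == nil || status.Code(err) == codes.NotFound {
    		return nil // already removed by the runtime; nothing to do
    	}
    	return fmt.Errorf("remove container %s: %w", id, err)
    }
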
podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587330 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587338 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587408 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587418 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587429 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587436 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587448 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587455 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587463 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587471 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="extract-utilities" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587482 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587491 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587502 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587509 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587521 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587528 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587537 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587544 4713 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587557 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587565 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="extract-content" Jan 26 15:40:09 crc kubenswrapper[4713]: E0126 15:40:09.587573 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587580 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587690 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="34325b63-2012-4f82-8860-c88e2847683b" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587703 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" containerName="marketplace-operator" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587715 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587730 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.587737 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7259d39-ff96-407d-b595-119128ba5677" containerName="registry-server" Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.588679 4713 util.go:30] "No sandbox for pod can be found. 
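
Note: before admitting the newly added certified-operators-kkffc pod, the CPU and memory managers sweep their checkpointed state and drop assignments belonging to pods that no longer exist; the E-level "RemoveStaleState" lines are routine housekeeping despite the error severity. A rough sketch of that sweep, under the assumption that state is a map keyed by podUID and containerName as the log fields suggest (all names hypothetical):

    package main

    import "fmt"

    // staleSweep drops per-container resource assignments whose pod UID is
    // no longer active -- the same shape of cleanup the cpu_manager /
    // state_mem lines above record. Sketch only, not the kubelet's code.
    func staleSweep(assignments map[string]map[string]string, activePods map[string]bool) {
    	for podUID, containers := range assignments {
    		if activePods[podUID] {
    			continue // pod still exists; keep its assignments
    		}
    		for containerName := range containers {
    			fmt.Printf("removing stale assignment podUID=%q container=%q\n", podUID, containerName)
    		}
    		delete(assignments, podUID) // deleting during range is safe in Go
    	}
    }
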
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.588679 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.590731 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.595287 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkffc"]
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.604009 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-utilities\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.604098 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm76\" (UniqueName: \"kubernetes.io/projected/a996c191-52e4-490d-a15a-9def9a651be5-kube-api-access-9rm76\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.604135 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-catalog-content\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.707594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm76\" (UniqueName: \"kubernetes.io/projected/a996c191-52e4-490d-a15a-9def9a651be5-kube-api-access-9rm76\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.707660 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-catalog-content\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.707721 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-utilities\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.708222 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-catalog-content\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.708426 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a996c191-52e4-490d-a15a-9def9a651be5-utilities\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.729864 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm76\" (UniqueName: \"kubernetes.io/projected/a996c191-52e4-490d-a15a-9def9a651be5-kube-api-access-9rm76\") pod \"certified-operators-kkffc\" (UID: \"a996c191-52e4-490d-a15a-9def9a651be5\") " pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.811137 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfb6957-a47e-4a83-befa-dbfc6a986ee9" path="/var/lib/kubelet/pods/2cfb6957-a47e-4a83-befa-dbfc6a986ee9/volumes"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.812150 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34325b63-2012-4f82-8860-c88e2847683b" path="/var/lib/kubelet/pods/34325b63-2012-4f82-8860-c88e2847683b/volumes"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.812962 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7263c807-ae6d-4fd4-af54-8372275f5c9a" path="/var/lib/kubelet/pods/7263c807-ae6d-4fd4-af54-8372275f5c9a/volumes"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.814670 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca13e433-706e-4733-97e9-5ef2af9d4d19" path="/var/lib/kubelet/pods/ca13e433-706e-4733-97e9-5ef2af9d4d19/volumes"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.815280 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7259d39-ff96-407d-b595-119128ba5677" path="/var/lib/kubelet/pods/d7259d39-ff96-407d-b595-119128ba5677/volumes"
Jan 26 15:40:09 crc kubenswrapper[4713]: I0126 15:40:09.901952 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.180833 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dwmf8"]
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.181853 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.184989 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.186579 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwmf8"]
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.214549 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-utilities\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.214734 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-catalog-content\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.214820 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b77z\" (UniqueName: \"kubernetes.io/projected/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-kube-api-access-2b77z\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.311113 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkffc"]
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.315710 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-catalog-content\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.315754 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b77z\" (UniqueName: \"kubernetes.io/projected/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-kube-api-access-2b77z\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.315809 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-utilities\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.316332 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-catalog-content\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.316494 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-utilities\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: W0126 15:40:10.318131 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda996c191_52e4_490d_a15a_9def9a651be5.slice/crio-bbfb89183ad836bd85a205bb0352537c0708bb051e8b55cf96c6546736255ed5 WatchSource:0}: Error finding container bbfb89183ad836bd85a205bb0352537c0708bb051e8b55cf96c6546736255ed5: Status 404 returned error can't find the container with id bbfb89183ad836bd85a205bb0352537c0708bb051e8b55cf96c6546736255ed5
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.341744 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b77z\" (UniqueName: \"kubernetes.io/projected/a1a9bc74-ffa8-4646-be3e-09cee80a5d04-kube-api-access-2b77z\") pod \"redhat-marketplace-dwmf8\" (UID: \"a1a9bc74-ffa8-4646-be3e-09cee80a5d04\") " pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.507285 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.911286 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwmf8"]
Jan 26 15:40:10 crc kubenswrapper[4713]: W0126 15:40:10.918409 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a9bc74_ffa8_4646_be3e_09cee80a5d04.slice/crio-c2c6831fc023d3dae715acaec0210639c0e772698c066901b623ede7c7ae89c9 WatchSource:0}: Error finding container c2c6831fc023d3dae715acaec0210639c0e772698c066901b623ede7c7ae89c9: Status 404 returned error can't find the container with id c2c6831fc023d3dae715acaec0210639c0e772698c066901b623ede7c7ae89c9
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.971781 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwmf8" event={"ID":"a1a9bc74-ffa8-4646-be3e-09cee80a5d04","Type":"ContainerStarted","Data":"c2c6831fc023d3dae715acaec0210639c0e772698c066901b623ede7c7ae89c9"}
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.973788 4713 generic.go:334] "Generic (PLEG): container finished" podID="a996c191-52e4-490d-a15a-9def9a651be5" containerID="6f4d9842da842494d3f92e662871c558dddb30a3f93dbbcba4079f4a34843961" exitCode=0
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.973830 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkffc" event={"ID":"a996c191-52e4-490d-a15a-9def9a651be5","Type":"ContainerDied","Data":"6f4d9842da842494d3f92e662871c558dddb30a3f93dbbcba4079f4a34843961"}
Jan 26 15:40:10 crc kubenswrapper[4713]: I0126 15:40:10.973855 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkffc" event={"ID":"a996c191-52e4-490d-a15a-9def9a651be5","Type":"ContainerStarted","Data":"bbfb89183ad836bd85a205bb0352537c0708bb051e8b55cf96c6546736255ed5"}
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.182010 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xx7m2"]
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.182831 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.193796 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xx7m2"]
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241449 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241500 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8af-356d-4ee9-9140-14cb9620b86f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241534 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d94b8af-356d-4ee9-9140-14cb9620b86f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241569 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-certificates\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241591 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-bound-sa-token\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241733 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-tls\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241760 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-trusted-ca\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.241777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7h2c\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-kube-api-access-c7h2c\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.271415 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.344972 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-certificates\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345045 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-bound-sa-token\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345087 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-tls\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345110 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-trusted-ca\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345135 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7h2c\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-kube-api-access-c7h2c\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345288 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8af-356d-4ee9-9140-14cb9620b86f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.345348 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d94b8af-356d-4ee9-9140-14cb9620b86f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.346338 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8af-356d-4ee9-9140-14cb9620b86f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.346695 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-certificates\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.351118 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-registry-tls\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.351109 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d94b8af-356d-4ee9-9140-14cb9620b86f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.355794 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d94b8af-356d-4ee9-9140-14cb9620b86f-trusted-ca\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.360309 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7h2c\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-kube-api-access-c7h2c\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.363856 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d94b8af-356d-4ee9-9140-14cb9620b86f-bound-sa-token\") pod \"image-registry-66df7c8f76-xx7m2\" (UID: \"8d94b8af-356d-4ee9-9140-14cb9620b86f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.497281 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.928815 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xx7m2"]
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.978002 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4jw55"]
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.978951 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.981542 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.986744 4713 generic.go:334] "Generic (PLEG): container finished" podID="a1a9bc74-ffa8-4646-be3e-09cee80a5d04" containerID="098b66f7e9ecd08eea8405c97715eabc228e12a915a884ccd88522f913d5d202" exitCode=0
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.986847 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwmf8" event={"ID":"a1a9bc74-ffa8-4646-be3e-09cee80a5d04","Type":"ContainerDied","Data":"098b66f7e9ecd08eea8405c97715eabc228e12a915a884ccd88522f913d5d202"}
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.988508 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkffc" event={"ID":"a996c191-52e4-490d-a15a-9def9a651be5","Type":"ContainerStarted","Data":"993aaf2d5b84b240b9b4d4121d96764c0753abf5e276b61a1b7901dd1569f99e"}
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.989281 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2" event={"ID":"8d94b8af-356d-4ee9-9140-14cb9620b86f","Type":"ContainerStarted","Data":"a690bb1ac73c1381d835847fa23199cbc7d6179fcf2fbbd9df1d12ed041a5b5d"}
Jan 26 15:40:11 crc kubenswrapper[4713]: I0126 15:40:11.999239 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jw55"]
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.055149 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgthn\" (UniqueName: \"kubernetes.io/projected/3da35423-6430-4e55-83aa-8a99fe5bdf2d-kube-api-access-cgthn\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.055316 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-utilities\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.055339 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-catalog-content\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.156850 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgthn\" (UniqueName: \"kubernetes.io/projected/3da35423-6430-4e55-83aa-8a99fe5bdf2d-kube-api-access-cgthn\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.156935 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-utilities\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.156954 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-catalog-content\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.157404 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-catalog-content\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.157635 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da35423-6430-4e55-83aa-8a99fe5bdf2d-utilities\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.173590 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgthn\" (UniqueName: \"kubernetes.io/projected/3da35423-6430-4e55-83aa-8a99fe5bdf2d-kube-api-access-cgthn\") pod \"redhat-operators-4jw55\" (UID: \"3da35423-6430-4e55-83aa-8a99fe5bdf2d\") " pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.393222 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.590408 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jc8sm"]
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.591874 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.594040 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.594599 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jw55"]
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.597603 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jc8sm"]
Jan 26 15:40:12 crc kubenswrapper[4713]: W0126 15:40:12.599771 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da35423_6430_4e55_83aa_8a99fe5bdf2d.slice/crio-7eb1fb0736b410ee9374c5d44bbebc42c2c3fc2f0cf8821d75927d89a04a184c WatchSource:0}: Error finding container 7eb1fb0736b410ee9374c5d44bbebc42c2c3fc2f0cf8821d75927d89a04a184c: Status 404 returned error can't find the container with id 7eb1fb0736b410ee9374c5d44bbebc42c2c3fc2f0cf8821d75927d89a04a184c
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.663115 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8r4\" (UniqueName: \"kubernetes.io/projected/b17b3d7f-6672-4596-ad0c-39a9bfac5792-kube-api-access-km8r4\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.663185 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-utilities\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.663204 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-catalog-content\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.764653 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8r4\" (UniqueName: \"kubernetes.io/projected/b17b3d7f-6672-4596-ad0c-39a9bfac5792-kube-api-access-km8r4\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.764837 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-utilities\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.764854 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-catalog-content\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm"
pod="openshift-marketplace/community-operators-jc8sm" Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.765223 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-catalog-content\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm" Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.765346 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17b3d7f-6672-4596-ad0c-39a9bfac5792-utilities\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm" Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.783998 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8r4\" (UniqueName: \"kubernetes.io/projected/b17b3d7f-6672-4596-ad0c-39a9bfac5792-kube-api-access-km8r4\") pod \"community-operators-jc8sm\" (UID: \"b17b3d7f-6672-4596-ad0c-39a9bfac5792\") " pod="openshift-marketplace/community-operators-jc8sm" Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.929650 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jc8sm" Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.998315 4713 generic.go:334] "Generic (PLEG): container finished" podID="3da35423-6430-4e55-83aa-8a99fe5bdf2d" containerID="3a7972fb8a6399bc1921e216e643464f9a2767fb7ce0c29a9570921526bd7a90" exitCode=0 Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.998421 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jw55" event={"ID":"3da35423-6430-4e55-83aa-8a99fe5bdf2d","Type":"ContainerDied","Data":"3a7972fb8a6399bc1921e216e643464f9a2767fb7ce0c29a9570921526bd7a90"} Jan 26 15:40:12 crc kubenswrapper[4713]: I0126 15:40:12.998451 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jw55" event={"ID":"3da35423-6430-4e55-83aa-8a99fe5bdf2d","Type":"ContainerStarted","Data":"7eb1fb0736b410ee9374c5d44bbebc42c2c3fc2f0cf8821d75927d89a04a184c"} Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.000591 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwmf8" event={"ID":"a1a9bc74-ffa8-4646-be3e-09cee80a5d04","Type":"ContainerStarted","Data":"235f7f6de306d0e99b3f7a3caf398273e32a013e12e991b04cc0086fc01c9843"} Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.004525 4713 generic.go:334] "Generic (PLEG): container finished" podID="a996c191-52e4-490d-a15a-9def9a651be5" containerID="993aaf2d5b84b240b9b4d4121d96764c0753abf5e276b61a1b7901dd1569f99e" exitCode=0 Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.004891 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkffc" event={"ID":"a996c191-52e4-490d-a15a-9def9a651be5","Type":"ContainerDied","Data":"993aaf2d5b84b240b9b4d4121d96764c0753abf5e276b61a1b7901dd1569f99e"} Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.008085 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2" event={"ID":"8d94b8af-356d-4ee9-9140-14cb9620b86f","Type":"ContainerStarted","Data":"bf07ed7e3abf66dd522e9ff1a2424c76cfaf725d32e976aba8f4f1eb7d96c33f"} Jan 26 
15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.008922 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2" Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.108790 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2" podStartSLOduration=2.108769583 podStartE2EDuration="2.108769583s" podCreationTimestamp="2026-01-26 15:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:13.103899971 +0000 UTC m=+388.240917206" watchObservedRunningTime="2026-01-26 15:40:13.108769583 +0000 UTC m=+388.245786818" Jan 26 15:40:13 crc kubenswrapper[4713]: I0126 15:40:13.408712 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jc8sm"] Jan 26 15:40:13 crc kubenswrapper[4713]: W0126 15:40:13.418036 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17b3d7f_6672_4596_ad0c_39a9bfac5792.slice/crio-be9e9020049025fcd7ba3a2cdd1808f1ee2421f9376e1454374b4f2d092f4548 WatchSource:0}: Error finding container be9e9020049025fcd7ba3a2cdd1808f1ee2421f9376e1454374b4f2d092f4548: Status 404 returned error can't find the container with id be9e9020049025fcd7ba3a2cdd1808f1ee2421f9376e1454374b4f2d092f4548 Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.014793 4713 generic.go:334] "Generic (PLEG): container finished" podID="a1a9bc74-ffa8-4646-be3e-09cee80a5d04" containerID="235f7f6de306d0e99b3f7a3caf398273e32a013e12e991b04cc0086fc01c9843" exitCode=0 Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.014888 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwmf8" event={"ID":"a1a9bc74-ffa8-4646-be3e-09cee80a5d04","Type":"ContainerDied","Data":"235f7f6de306d0e99b3f7a3caf398273e32a013e12e991b04cc0086fc01c9843"} Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.016851 4713 generic.go:334] "Generic (PLEG): container finished" podID="b17b3d7f-6672-4596-ad0c-39a9bfac5792" containerID="cc281de180643ee416449f9d53d60bd33ac0ec52516f77a620cdb183bb78c11f" exitCode=0 Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.016888 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jc8sm" event={"ID":"b17b3d7f-6672-4596-ad0c-39a9bfac5792","Type":"ContainerDied","Data":"cc281de180643ee416449f9d53d60bd33ac0ec52516f77a620cdb183bb78c11f"} Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.016928 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jc8sm" event={"ID":"b17b3d7f-6672-4596-ad0c-39a9bfac5792","Type":"ContainerStarted","Data":"be9e9020049025fcd7ba3a2cdd1808f1ee2421f9376e1454374b4f2d092f4548"} Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.021192 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkffc" event={"ID":"a996c191-52e4-490d-a15a-9def9a651be5","Type":"ContainerStarted","Data":"ebedd1427b86c78963bcf2c23c02fb17060a0e102f72ca1c93885d5f1f3a4fe1"} Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.024668 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jw55" 
event={"ID":"3da35423-6430-4e55-83aa-8a99fe5bdf2d","Type":"ContainerStarted","Data":"8a9512e87827df7d5a0d61dfd562b12350505bc66f6fc7569ac2b79a5372ffa3"} Jan 26 15:40:14 crc kubenswrapper[4713]: I0126 15:40:14.082352 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kkffc" podStartSLOduration=2.60143913 podStartE2EDuration="5.08233035s" podCreationTimestamp="2026-01-26 15:40:09 +0000 UTC" firstStartedPulling="2026-01-26 15:40:10.977100951 +0000 UTC m=+386.114118186" lastFinishedPulling="2026-01-26 15:40:13.457992171 +0000 UTC m=+388.595009406" observedRunningTime="2026-01-26 15:40:14.079329736 +0000 UTC m=+389.216346981" watchObservedRunningTime="2026-01-26 15:40:14.08233035 +0000 UTC m=+389.219347585" Jan 26 15:40:15 crc kubenswrapper[4713]: I0126 15:40:15.037715 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jc8sm" event={"ID":"b17b3d7f-6672-4596-ad0c-39a9bfac5792","Type":"ContainerStarted","Data":"6d85273cf36aae690d85d4fec9f522d4476e384bdd67d6b4e5770c967ea42628"} Jan 26 15:40:15 crc kubenswrapper[4713]: I0126 15:40:15.040163 4713 generic.go:334] "Generic (PLEG): container finished" podID="3da35423-6430-4e55-83aa-8a99fe5bdf2d" containerID="8a9512e87827df7d5a0d61dfd562b12350505bc66f6fc7569ac2b79a5372ffa3" exitCode=0 Jan 26 15:40:15 crc kubenswrapper[4713]: I0126 15:40:15.042503 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jw55" event={"ID":"3da35423-6430-4e55-83aa-8a99fe5bdf2d","Type":"ContainerDied","Data":"8a9512e87827df7d5a0d61dfd562b12350505bc66f6fc7569ac2b79a5372ffa3"} Jan 26 15:40:15 crc kubenswrapper[4713]: I0126 15:40:15.049312 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwmf8" event={"ID":"a1a9bc74-ffa8-4646-be3e-09cee80a5d04","Type":"ContainerStarted","Data":"123e24cbbe59d7332feaea766e31642f2273e0279017fc4b5fd0056c89f469a5"} Jan 26 15:40:15 crc kubenswrapper[4713]: I0126 15:40:15.110913 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dwmf8" podStartSLOduration=2.647395639 podStartE2EDuration="5.110894996s" podCreationTimestamp="2026-01-26 15:40:10 +0000 UTC" firstStartedPulling="2026-01-26 15:40:11.990657447 +0000 UTC m=+387.127674682" lastFinishedPulling="2026-01-26 15:40:14.454156804 +0000 UTC m=+389.591174039" observedRunningTime="2026-01-26 15:40:15.108948355 +0000 UTC m=+390.245965600" watchObservedRunningTime="2026-01-26 15:40:15.110894996 +0000 UTC m=+390.247912231" Jan 26 15:40:16 crc kubenswrapper[4713]: I0126 15:40:16.053788 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jw55" event={"ID":"3da35423-6430-4e55-83aa-8a99fe5bdf2d","Type":"ContainerStarted","Data":"6b221e76ac5ec1b836de24908e93a326bf5b0e07345e54bebc6ef883fbb7e78a"} Jan 26 15:40:16 crc kubenswrapper[4713]: I0126 15:40:16.055730 4713 generic.go:334] "Generic (PLEG): container finished" podID="b17b3d7f-6672-4596-ad0c-39a9bfac5792" containerID="6d85273cf36aae690d85d4fec9f522d4476e384bdd67d6b4e5770c967ea42628" exitCode=0 Jan 26 15:40:16 crc kubenswrapper[4713]: I0126 15:40:16.055821 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jc8sm" event={"ID":"b17b3d7f-6672-4596-ad0c-39a9bfac5792","Type":"ContainerDied","Data":"6d85273cf36aae690d85d4fec9f522d4476e384bdd67d6b4e5770c967ea42628"} Jan 26 15:40:16 crc 
Jan 26 15:40:16 crc kubenswrapper[4713]: I0126 15:40:16.099780 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4jw55" podStartSLOduration=2.576478941 podStartE2EDuration="5.099765436s" podCreationTimestamp="2026-01-26 15:40:11 +0000 UTC" firstStartedPulling="2026-01-26 15:40:13.002571083 +0000 UTC m=+388.139588358" lastFinishedPulling="2026-01-26 15:40:15.525857618 +0000 UTC m=+390.662874853" observedRunningTime="2026-01-26 15:40:16.079589237 +0000 UTC m=+391.216606492" watchObservedRunningTime="2026-01-26 15:40:16.099765436 +0000 UTC m=+391.236782671"
Jan 26 15:40:17 crc kubenswrapper[4713]: I0126 15:40:17.077294 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jc8sm" event={"ID":"b17b3d7f-6672-4596-ad0c-39a9bfac5792","Type":"ContainerStarted","Data":"18ca1607d7d5759e6342c9e55bac13fb61c928bb2b3c0fe442403f90ef8d3e22"}
Jan 26 15:40:17 crc kubenswrapper[4713]: I0126 15:40:17.096552 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jc8sm" podStartSLOduration=2.625434844 podStartE2EDuration="5.096537128s" podCreationTimestamp="2026-01-26 15:40:12 +0000 UTC" firstStartedPulling="2026-01-26 15:40:14.018152733 +0000 UTC m=+389.155169968" lastFinishedPulling="2026-01-26 15:40:16.489255007 +0000 UTC m=+391.626272252" observedRunningTime="2026-01-26 15:40:17.095825965 +0000 UTC m=+392.232843200" watchObservedRunningTime="2026-01-26 15:40:17.096537128 +0000 UTC m=+392.233554363"
Jan 26 15:40:19 crc kubenswrapper[4713]: I0126 15:40:19.902106 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:19 crc kubenswrapper[4713]: I0126 15:40:19.902662 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:19 crc kubenswrapper[4713]: I0126 15:40:19.977457 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:20 crc kubenswrapper[4713]: I0126 15:40:20.129516 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kkffc"
Jan 26 15:40:20 crc kubenswrapper[4713]: I0126 15:40:20.508522 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:20 crc kubenswrapper[4713]: I0126 15:40:20.508587 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:20 crc kubenswrapper[4713]: I0126 15:40:20.548375 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:21 crc kubenswrapper[4713]: I0126 15:40:21.148182 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dwmf8"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.394323 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.395025 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.444627 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.930576 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.931080 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:22 crc kubenswrapper[4713]: I0126 15:40:22.984970 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:23 crc kubenswrapper[4713]: I0126 15:40:23.142124 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jc8sm"
Jan 26 15:40:23 crc kubenswrapper[4713]: I0126 15:40:23.147223 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4jw55"
Jan 26 15:40:31 crc kubenswrapper[4713]: I0126 15:40:31.509499 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xx7m2"
Jan 26 15:40:31 crc kubenswrapper[4713]: I0126 15:40:31.573652 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"]
Jan 26 15:40:33 crc kubenswrapper[4713]: I0126 15:40:33.301769 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:40:33 crc kubenswrapper[4713]: I0126 15:40:33.302578 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:40:56 crc kubenswrapper[4713]: I0126 15:40:56.609894 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" podUID="3e40f73a-b547-4c3f-a7a7-125032576150" containerName="registry" containerID="cri-o://a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6" gracePeriod=30
Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.036707 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x"
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122209 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122667 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122763 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122812 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122928 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.122976 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqblt\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.123084 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.123298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3e40f73a-b547-4c3f-a7a7-125032576150\" (UID: \"3e40f73a-b547-4c3f-a7a7-125032576150\") " Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.123971 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.125680 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.131695 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.131989 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.132496 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt" (OuterVolumeSpecName: "kube-api-access-dqblt") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "kube-api-access-dqblt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.137924 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.140947 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.149643 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3e40f73a-b547-4c3f-a7a7-125032576150" (UID: "3e40f73a-b547-4c3f-a7a7-125032576150"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227189 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqblt\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-kube-api-access-dqblt\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227259 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227279 4713 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3e40f73a-b547-4c3f-a7a7-125032576150-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227296 4713 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227314 4713 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3e40f73a-b547-4c3f-a7a7-125032576150-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227334 4713 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3e40f73a-b547-4c3f-a7a7-125032576150-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.227350 4713 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3e40f73a-b547-4c3f-a7a7-125032576150-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.326566 4713 generic.go:334] "Generic (PLEG): container finished" podID="3e40f73a-b547-4c3f-a7a7-125032576150" containerID="a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6" exitCode=0 Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.326629 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" event={"ID":"3e40f73a-b547-4c3f-a7a7-125032576150","Type":"ContainerDied","Data":"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6"} Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.326671 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" event={"ID":"3e40f73a-b547-4c3f-a7a7-125032576150","Type":"ContainerDied","Data":"d5d0021e4a978e34f609b28b960df750e646356160cb370a7dd831dce2a85660"} Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.326701 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j6h8x" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.326725 4713 scope.go:117] "RemoveContainer" containerID="a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.349054 4713 scope.go:117] "RemoveContainer" containerID="a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6" Jan 26 15:40:57 crc kubenswrapper[4713]: E0126 15:40:57.349595 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6\": container with ID starting with a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6 not found: ID does not exist" containerID="a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.349660 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6"} err="failed to get container status \"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6\": rpc error: code = NotFound desc = could not find container \"a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6\": container with ID starting with a5bdd99a7f029c52cd28a97dc2dcc96201d99ace395a15b5fbf017e25044cff6 not found: ID does not exist" Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.382534 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"] Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.386960 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j6h8x"] Jan 26 15:40:57 crc kubenswrapper[4713]: I0126 15:40:57.814916 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e40f73a-b547-4c3f-a7a7-125032576150" path="/var/lib/kubelet/pods/3e40f73a-b547-4c3f-a7a7-125032576150/volumes" Jan 26 15:41:03 crc kubenswrapper[4713]: I0126 15:41:03.301500 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:41:03 crc kubenswrapper[4713]: I0126 15:41:03.301850 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:41:03 crc kubenswrapper[4713]: I0126 15:41:03.301916 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:41:03 crc kubenswrapper[4713]: I0126 15:41:03.302818 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:41:03 crc kubenswrapper[4713]: I0126 
15:41:03.302919 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59" gracePeriod=600 Jan 26 15:41:04 crc kubenswrapper[4713]: I0126 15:41:04.386662 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59" exitCode=0 Jan 26 15:41:04 crc kubenswrapper[4713]: I0126 15:41:04.387164 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59"} Jan 26 15:41:04 crc kubenswrapper[4713]: I0126 15:41:04.387191 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98"} Jan 26 15:41:04 crc kubenswrapper[4713]: I0126 15:41:04.387208 4713 scope.go:117] "RemoveContainer" containerID="38fa7c9dbb11cc947dc564713ebef59164ab5753c50df336bcf83ff0e06f8c2c" Jan 26 15:43:33 crc kubenswrapper[4713]: I0126 15:43:33.301946 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:43:33 crc kubenswrapper[4713]: I0126 15:43:33.302573 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:03 crc kubenswrapper[4713]: I0126 15:44:03.301188 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:03 crc kubenswrapper[4713]: I0126 15:44:03.301875 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.302035 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.303498 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.303564 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.304194 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.304266 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98" gracePeriod=600 Jan 26 15:44:33 crc kubenswrapper[4713]: E0126 15:44:33.417654 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf608dd80_4cbf_4490_b062_2bef233d25d1.slice/crio-conmon-3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.635697 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98" exitCode=0 Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.635728 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98"} Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.636107 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a"} Jan 26 15:44:33 crc kubenswrapper[4713]: I0126 15:44:33.636125 4713 scope.go:117] "RemoveContainer" containerID="3c0b34afb80cf07b93de951a7abde48be6ab6179835763ec54e5cb9bb0493d59" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.184917 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8"] Jan 26 15:45:00 crc kubenswrapper[4713]: E0126 15:45:00.186232 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e40f73a-b547-4c3f-a7a7-125032576150" containerName="registry" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.186266 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e40f73a-b547-4c3f-a7a7-125032576150" containerName="registry" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.186567 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e40f73a-b547-4c3f-a7a7-125032576150" containerName="registry" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.187433 
4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.190979 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8"] Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.191894 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.193689 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.270656 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.270722 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjss\" (UniqueName: \"kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.270765 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.372251 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.372496 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.372611 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kjss\" (UniqueName: \"kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.373840 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.380780 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.402252 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kjss\" (UniqueName: \"kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss\") pod \"collect-profiles-29490705-7s7r8\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.503843 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.754988 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8"] Jan 26 15:45:00 crc kubenswrapper[4713]: I0126 15:45:00.809224 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" event={"ID":"8a6a4266-769b-46d9-b5f2-6873207578ba","Type":"ContainerStarted","Data":"ff3e79a7e4c79072a6689f7785419733927c55bb347b5ecdebb89a9b615d9a85"} Jan 26 15:45:01 crc kubenswrapper[4713]: I0126 15:45:01.818554 4713 generic.go:334] "Generic (PLEG): container finished" podID="8a6a4266-769b-46d9-b5f2-6873207578ba" containerID="bd59df16ea7405eeb2bfb5a0db37e78a81e48fbc6280327281878376128a76f0" exitCode=0 Jan 26 15:45:01 crc kubenswrapper[4713]: I0126 15:45:01.818620 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" event={"ID":"8a6a4266-769b-46d9-b5f2-6873207578ba","Type":"ContainerDied","Data":"bd59df16ea7405eeb2bfb5a0db37e78a81e48fbc6280327281878376128a76f0"} Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.045725 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.215199 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume\") pod \"8a6a4266-769b-46d9-b5f2-6873207578ba\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.215248 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume\") pod \"8a6a4266-769b-46d9-b5f2-6873207578ba\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.215310 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kjss\" (UniqueName: \"kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss\") pod \"8a6a4266-769b-46d9-b5f2-6873207578ba\" (UID: \"8a6a4266-769b-46d9-b5f2-6873207578ba\") " Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.216089 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a6a4266-769b-46d9-b5f2-6873207578ba" (UID: "8a6a4266-769b-46d9-b5f2-6873207578ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.216624 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a6a4266-769b-46d9-b5f2-6873207578ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.223233 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a6a4266-769b-46d9-b5f2-6873207578ba" (UID: "8a6a4266-769b-46d9-b5f2-6873207578ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.223308 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss" (OuterVolumeSpecName: "kube-api-access-7kjss") pod "8a6a4266-769b-46d9-b5f2-6873207578ba" (UID: "8a6a4266-769b-46d9-b5f2-6873207578ba"). InnerVolumeSpecName "kube-api-access-7kjss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.318209 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a6a4266-769b-46d9-b5f2-6873207578ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.318266 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kjss\" (UniqueName: \"kubernetes.io/projected/8a6a4266-769b-46d9-b5f2-6873207578ba-kube-api-access-7kjss\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.833156 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" event={"ID":"8a6a4266-769b-46d9-b5f2-6873207578ba","Type":"ContainerDied","Data":"ff3e79a7e4c79072a6689f7785419733927c55bb347b5ecdebb89a9b615d9a85"} Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.833208 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3e79a7e4c79072a6689f7785419733927c55bb347b5ecdebb89a9b615d9a85" Jan 26 15:45:03 crc kubenswrapper[4713]: I0126 15:45:03.833217 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8" Jan 26 15:45:03 crc kubenswrapper[4713]: E0126 15:45:03.900942 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6a4266_769b_46d9_b5f2_6873207578ba.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6a4266_769b_46d9_b5f2_6873207578ba.slice/crio-ff3e79a7e4c79072a6689f7785419733927c55bb347b5ecdebb89a9b615d9a85\": RecentStats: unable to find data in memory cache]" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.556858 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm"] Jan 26 15:46:05 crc kubenswrapper[4713]: E0126 15:46:05.557570 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6a4266-769b-46d9-b5f2-6873207578ba" containerName="collect-profiles" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.557583 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6a4266-769b-46d9-b5f2-6873207578ba" containerName="collect-profiles" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.557684 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6a4266-769b-46d9-b5f2-6873207578ba" containerName="collect-profiles" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.558386 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.567260 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.572901 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm"] Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.689839 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf68v\" (UniqueName: \"kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.689925 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.690566 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.791705 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf68v\" (UniqueName: \"kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.791772 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.791811 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.792294 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.792413 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.812763 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf68v\" (UniqueName: \"kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:05 crc kubenswrapper[4713]: I0126 15:46:05.876263 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:06 crc kubenswrapper[4713]: I0126 15:46:06.076261 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm"] Jan 26 15:46:06 crc kubenswrapper[4713]: I0126 15:46:06.334188 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerStarted","Data":"1493f16a528f4df1ab1ee96056244d27eaa76a1051ecf9f7f7f422ebde55d9d9"} Jan 26 15:46:06 crc kubenswrapper[4713]: I0126 15:46:06.334642 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerStarted","Data":"c1bf77c5fb2d830582b881f2cd2fb086aae81d3f348369787c6b1d578ac4da80"} Jan 26 15:46:07 crc kubenswrapper[4713]: I0126 15:46:07.344019 4713 generic.go:334] "Generic (PLEG): container finished" podID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerID="1493f16a528f4df1ab1ee96056244d27eaa76a1051ecf9f7f7f422ebde55d9d9" exitCode=0 Jan 26 15:46:07 crc kubenswrapper[4713]: I0126 15:46:07.344090 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerDied","Data":"1493f16a528f4df1ab1ee96056244d27eaa76a1051ecf9f7f7f422ebde55d9d9"} Jan 26 15:46:07 crc kubenswrapper[4713]: I0126 15:46:07.346486 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:46:09 crc kubenswrapper[4713]: I0126 15:46:09.356869 4713 generic.go:334] "Generic (PLEG): container finished" podID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerID="294df03e7d96280b24099c6e5da86e97974cccfeaac4ca7010f57848e84be7e6" exitCode=0 Jan 26 15:46:09 crc kubenswrapper[4713]: I0126 15:46:09.356954 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerDied","Data":"294df03e7d96280b24099c6e5da86e97974cccfeaac4ca7010f57848e84be7e6"} Jan 26 15:46:10 crc kubenswrapper[4713]: I0126 15:46:10.367969 4713 generic.go:334] "Generic (PLEG): container finished" podID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerID="211c09fac352630a8860d2f0b6b883ba9d85e9a144853bb54c8a1de04871b767" exitCode=0 Jan 26 15:46:10 crc kubenswrapper[4713]: I0126 15:46:10.368061 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerDied","Data":"211c09fac352630a8860d2f0b6b883ba9d85e9a144853bb54c8a1de04871b767"} Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.605904 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.700245 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf68v\" (UniqueName: \"kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v\") pod \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.700329 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle\") pod \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.700437 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util\") pod \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\" (UID: \"4f30d0d9-a953-4da6-be6b-32fc986c16ae\") " Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.702895 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle" (OuterVolumeSpecName: "bundle") pod "4f30d0d9-a953-4da6-be6b-32fc986c16ae" (UID: "4f30d0d9-a953-4da6-be6b-32fc986c16ae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.706975 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v" (OuterVolumeSpecName: "kube-api-access-gf68v") pod "4f30d0d9-a953-4da6-be6b-32fc986c16ae" (UID: "4f30d0d9-a953-4da6-be6b-32fc986c16ae"). InnerVolumeSpecName "kube-api-access-gf68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.711301 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util" (OuterVolumeSpecName: "util") pod "4f30d0d9-a953-4da6-be6b-32fc986c16ae" (UID: "4f30d0d9-a953-4da6-be6b-32fc986c16ae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.802333 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf68v\" (UniqueName: \"kubernetes.io/projected/4f30d0d9-a953-4da6-be6b-32fc986c16ae-kube-api-access-gf68v\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.802430 4713 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:11 crc kubenswrapper[4713]: I0126 15:46:11.802451 4713 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f30d0d9-a953-4da6-be6b-32fc986c16ae-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:12 crc kubenswrapper[4713]: I0126 15:46:12.397372 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" event={"ID":"4f30d0d9-a953-4da6-be6b-32fc986c16ae","Type":"ContainerDied","Data":"c1bf77c5fb2d830582b881f2cd2fb086aae81d3f348369787c6b1d578ac4da80"} Jan 26 15:46:12 crc kubenswrapper[4713]: I0126 15:46:12.397416 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1bf77c5fb2d830582b881f2cd2fb086aae81d3f348369787c6b1d578ac4da80" Jan 26 15:46:12 crc kubenswrapper[4713]: I0126 15:46:12.397508 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm" Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.398903 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2drw2"] Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399729 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-controller" containerID="cri-o://543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399869 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-acl-logging" containerID="cri-o://bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399937 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="sbdb" containerID="cri-o://1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399949 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399916 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="northd" 
containerID="cri-o://feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.399973 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="nbdb" containerID="cri-o://351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.400286 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-node" containerID="cri-o://6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" gracePeriod=30 Jan 26 15:46:16 crc kubenswrapper[4713]: I0126 15:46:16.473983 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" containerID="cri-o://ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" gracePeriod=30 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.397976 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/3.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.400543 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovn-acl-logging/0.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.401029 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovn-controller/0.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.401462 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.437099 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovnkube-controller/3.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.439425 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovn-acl-logging/0.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442314 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2drw2_4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/ovn-controller/0.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442789 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442813 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442820 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442826 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442833 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442840 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" exitCode=0 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442847 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" exitCode=143 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442854 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" exitCode=143 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442911 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442941 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442952 4713 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442961 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442971 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442976 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442991 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.442980 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443145 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443155 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443161 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443166 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443172 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443177 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443182 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443187 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:46:17 
crc kubenswrapper[4713]: I0126 15:46:17.443192 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443200 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443208 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443215 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443220 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443225 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443231 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443235 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443240 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443245 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443250 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443255 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443262 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443270 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443276 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443282 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443287 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443292 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443298 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443302 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443308 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443312 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443318 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443326 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2drw2" event={"ID":"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c","Type":"ContainerDied","Data":"2b66bf81676bede77b67f73ddef6eb873ce0e6fdaf418381db2441f7a2dac300"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443335 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443341 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443346 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443351 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443357 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443364 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443381 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443387 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443395 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.443401 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.446236 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/2.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.446926 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/1.log" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.446973 4713 generic.go:334] "Generic (PLEG): container finished" podID="d21f731c-7a63-4c3c-bdc5-9267197741d4" containerID="c09e4420e3c3da6375408a7e83498526aaae364774050a8fa7364578b9ec8e35" exitCode=2 Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.447012 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerDied","Data":"c09e4420e3c3da6375408a7e83498526aaae364774050a8fa7364578b9ec8e35"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.447038 4713 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77"} Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.447592 4713 scope.go:117] "RemoveContainer" containerID="c09e4420e3c3da6375408a7e83498526aaae364774050a8fa7364578b9ec8e35" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.475071 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483058 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 
crc kubenswrapper[4713]: I0126 15:46:17.483117 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483149 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483170 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483200 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483221 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483241 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483262 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483293 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483317 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483338 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: 
I0126 15:46:17.483377 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483421 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483445 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483476 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483502 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483543 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483579 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483601 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483636 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmw7m\" (UniqueName: \"kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m\") pod \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\" (UID: \"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c\") " Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483902 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: 
"4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.483953 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484245 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484272 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484291 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log" (OuterVolumeSpecName: "node-log") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484311 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484331 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484349 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484370 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash" (OuterVolumeSpecName: "host-slash") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484388 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484421 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484656 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484680 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484697 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket" (OuterVolumeSpecName: "log-socket") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484716 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.484917 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.486130 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.504924 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.505213 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m" (OuterVolumeSpecName: "kube-api-access-xmw7m") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "kube-api-access-xmw7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.522542 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xcdkj"] Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523253 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="pull" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523278 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="pull" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523295 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523305 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523315 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523322 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523329 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-acl-logging" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523336 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-acl-logging" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523346 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523353 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" 
containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523361 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523367 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523376 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kubecfg-setup" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523383 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kubecfg-setup" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523411 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523419 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523429 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523435 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523443 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="nbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523450 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="nbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523457 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="sbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523463 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="sbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523472 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-node" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523477 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-node" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523487 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="util" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523492 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="util" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523499 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="northd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523505 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="northd" Jan 26 
15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523514 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="extract" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523520 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="extract" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.523529 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523536 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523631 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f30d0d9-a953-4da6-be6b-32fc986c16ae" containerName="extract" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523642 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523661 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523668 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523676 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523683 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="northd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523691 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="sbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523698 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="kube-rbac-proxy-node" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523705 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="nbdb" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523711 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523717 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovn-acl-logging" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523883 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.523893 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" containerName="ovnkube-controller" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.522693 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" (UID: "4ba2d551-0768-4bac-9af5-bd6e7e58ce8c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.525447 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.539050 4713 scope.go:117] "RemoveContainer" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.585979 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-systemd-units\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586088 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586134 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-config\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586170 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-systemd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586201 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-slash\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586224 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-kubelet\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586246 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-netd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586259 
4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-env-overrides\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586272 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-var-lib-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586287 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-ovn\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586304 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-node-log\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586318 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-log-socket\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586396 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-etc-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586417 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41971b85-d2d0-41bf-b45d-5d923bced496-ovn-node-metrics-cert\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586550 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-script-lib\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586633 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-bin\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" 
Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586731 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586778 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-netns\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586799 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586848 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxgf\" (UniqueName: \"kubernetes.io/projected/41971b85-d2d0-41bf-b45d-5d923bced496-kube-api-access-2dxgf\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586898 4713 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586909 4713 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586918 4713 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586927 4713 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586938 4713 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586947 4713 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586956 4713 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 
crc kubenswrapper[4713]: I0126 15:46:17.586965 4713 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586975 4713 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586983 4713 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.586992 4713 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587002 4713 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587010 4713 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587021 4713 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587029 4713 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587038 4713 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587046 4713 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587054 4713 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587065 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmw7m\" (UniqueName: \"kubernetes.io/projected/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-kube-api-access-xmw7m\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.587073 4713 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 
15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.588666 4713 scope.go:117] "RemoveContainer" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.605129 4713 scope.go:117] "RemoveContainer" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.621777 4713 scope.go:117] "RemoveContainer" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.641991 4713 scope.go:117] "RemoveContainer" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.657299 4713 scope.go:117] "RemoveContainer" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688200 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-slash\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688243 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-kubelet\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688263 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-env-overrides\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688279 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-var-lib-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688293 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-ovn\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688310 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-netd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688325 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-node-log\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 
15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688340 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-log-socket\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688374 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-etc-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688402 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41971b85-d2d0-41bf-b45d-5d923bced496-ovn-node-metrics-cert\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688418 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-script-lib\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688433 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-bin\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688457 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688481 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-netns\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688496 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688517 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxgf\" (UniqueName: \"kubernetes.io/projected/41971b85-d2d0-41bf-b45d-5d923bced496-kube-api-access-2dxgf\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc 
kubenswrapper[4713]: I0126 15:46:17.688554 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-systemd-units\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688574 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688592 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-systemd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.688608 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-config\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689369 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-config\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689438 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-slash\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689467 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-kubelet\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689747 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-env-overrides\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689779 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-var-lib-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689802 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-ovn\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689821 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-netd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689841 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-node-log\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689862 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-log-socket\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.689882 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-etc-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690340 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-openvswitch\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690434 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690435 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-cni-bin\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690492 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-systemd-units\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690515 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-netns\") pod \"ovnkube-node-xcdkj\" (UID: 
\"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690580 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-host-run-ovn-kubernetes\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690615 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41971b85-d2d0-41bf-b45d-5d923bced496-run-systemd\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.690987 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41971b85-d2d0-41bf-b45d-5d923bced496-ovnkube-script-lib\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.696169 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41971b85-d2d0-41bf-b45d-5d923bced496-ovn-node-metrics-cert\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.698511 4713 scope.go:117] "RemoveContainer" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.708463 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxgf\" (UniqueName: \"kubernetes.io/projected/41971b85-d2d0-41bf-b45d-5d923bced496-kube-api-access-2dxgf\") pod \"ovnkube-node-xcdkj\" (UID: \"41971b85-d2d0-41bf-b45d-5d923bced496\") " pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.734330 4713 scope.go:117] "RemoveContainer" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.753450 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.754008 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754061 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} err="failed to get container status \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with 
ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754098 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.754567 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": container with ID starting with fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f not found: ID does not exist" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754596 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} err="failed to get container status \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": rpc error: code = NotFound desc = could not find container \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": container with ID starting with fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754624 4713 scope.go:117] "RemoveContainer" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.754927 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": container with ID starting with 1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c not found: ID does not exist" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754965 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} err="failed to get container status \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": rpc error: code = NotFound desc = could not find container \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": container with ID starting with 1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.754990 4713 scope.go:117] "RemoveContainer" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.755302 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": container with ID starting with 351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36 not found: ID does not exist" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.755347 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} err="failed to get container status \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": rpc 
error: code = NotFound desc = could not find container \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": container with ID starting with 351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.755393 4713 scope.go:117] "RemoveContainer" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.755811 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": container with ID starting with feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221 not found: ID does not exist" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.755846 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} err="failed to get container status \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": rpc error: code = NotFound desc = could not find container \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": container with ID starting with feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.755864 4713 scope.go:117] "RemoveContainer" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.756214 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": container with ID starting with 152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6 not found: ID does not exist" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756241 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} err="failed to get container status \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": rpc error: code = NotFound desc = could not find container \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": container with ID starting with 152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756261 4713 scope.go:117] "RemoveContainer" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.756589 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": container with ID starting with 6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd not found: ID does not exist" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756617 4713 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} err="failed to get container status \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": rpc error: code = NotFound desc = could not find container \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": container with ID starting with 6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756634 4713 scope.go:117] "RemoveContainer" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.756902 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": container with ID starting with bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff not found: ID does not exist" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756936 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} err="failed to get container status \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": rpc error: code = NotFound desc = could not find container \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": container with ID starting with bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.756955 4713 scope.go:117] "RemoveContainer" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.757296 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": container with ID starting with 543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0 not found: ID does not exist" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.757321 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} err="failed to get container status \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": rpc error: code = NotFound desc = could not find container \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": container with ID starting with 543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.757336 4713 scope.go:117] "RemoveContainer" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: E0126 15:46:17.758946 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": container with ID starting with 924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d not found: ID does not exist" 
containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.758976 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} err="failed to get container status \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": rpc error: code = NotFound desc = could not find container \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": container with ID starting with 924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.758993 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.759243 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} err="failed to get container status \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.759270 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.759929 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} err="failed to get container status \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": rpc error: code = NotFound desc = could not find container \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": container with ID starting with fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.759953 4713 scope.go:117] "RemoveContainer" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.760302 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} err="failed to get container status \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": rpc error: code = NotFound desc = could not find container \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": container with ID starting with 1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.760324 4713 scope.go:117] "RemoveContainer" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.761778 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} err="failed to get container status \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": rpc error: code = NotFound desc = could not find 
container \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": container with ID starting with 351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.761801 4713 scope.go:117] "RemoveContainer" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.762647 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} err="failed to get container status \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": rpc error: code = NotFound desc = could not find container \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": container with ID starting with feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.762681 4713 scope.go:117] "RemoveContainer" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765096 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} err="failed to get container status \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": rpc error: code = NotFound desc = could not find container \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": container with ID starting with 152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765129 4713 scope.go:117] "RemoveContainer" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765316 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} err="failed to get container status \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": rpc error: code = NotFound desc = could not find container \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": container with ID starting with 6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765339 4713 scope.go:117] "RemoveContainer" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765552 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} err="failed to get container status \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": rpc error: code = NotFound desc = could not find container \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": container with ID starting with bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765574 4713 scope.go:117] "RemoveContainer" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765945 4713 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} err="failed to get container status \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": rpc error: code = NotFound desc = could not find container \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": container with ID starting with 543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.765968 4713 scope.go:117] "RemoveContainer" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766260 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} err="failed to get container status \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": rpc error: code = NotFound desc = could not find container \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": container with ID starting with 924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766281 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766493 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} err="failed to get container status \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766513 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766697 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} err="failed to get container status \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": rpc error: code = NotFound desc = could not find container \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": container with ID starting with fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766718 4713 scope.go:117] "RemoveContainer" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766912 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} err="failed to get container status \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": rpc error: code = NotFound desc = could not find container \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": container with ID starting with 
1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.766933 4713 scope.go:117] "RemoveContainer" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767147 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} err="failed to get container status \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": rpc error: code = NotFound desc = could not find container \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": container with ID starting with 351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767169 4713 scope.go:117] "RemoveContainer" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767344 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} err="failed to get container status \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": rpc error: code = NotFound desc = could not find container \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": container with ID starting with feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767370 4713 scope.go:117] "RemoveContainer" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767625 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} err="failed to get container status \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": rpc error: code = NotFound desc = could not find container \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": container with ID starting with 152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.767647 4713 scope.go:117] "RemoveContainer" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.768035 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} err="failed to get container status \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": rpc error: code = NotFound desc = could not find container \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": container with ID starting with 6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.768056 4713 scope.go:117] "RemoveContainer" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.769608 4713 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} err="failed to get container status \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": rpc error: code = NotFound desc = could not find container \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": container with ID starting with bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.769631 4713 scope.go:117] "RemoveContainer" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.769831 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} err="failed to get container status \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": rpc error: code = NotFound desc = could not find container \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": container with ID starting with 543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.769854 4713 scope.go:117] "RemoveContainer" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770051 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} err="failed to get container status \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": rpc error: code = NotFound desc = could not find container \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": container with ID starting with 924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770072 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770260 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} err="failed to get container status \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770279 4713 scope.go:117] "RemoveContainer" containerID="fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770468 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f"} err="failed to get container status \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": rpc error: code = NotFound desc = could not find container \"fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f\": container with ID starting with fc1643849b8d861a19761ffbd09d3dcdd26dba24f4c110eb0009168d3c208e7f not found: ID does not exist" Jan 
26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770491 4713 scope.go:117] "RemoveContainer" containerID="1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770725 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c"} err="failed to get container status \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": rpc error: code = NotFound desc = could not find container \"1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c\": container with ID starting with 1fc77409177cba90ce117900cc9ce5865436533a2d1fbd84ac8bd63027d06e3c not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770747 4713 scope.go:117] "RemoveContainer" containerID="351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770974 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36"} err="failed to get container status \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": rpc error: code = NotFound desc = could not find container \"351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36\": container with ID starting with 351efe019440ab7b4e55ee49354996f8ae240e03239a76939294e2ba04383b36 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.770995 4713 scope.go:117] "RemoveContainer" containerID="feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771218 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221"} err="failed to get container status \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": rpc error: code = NotFound desc = could not find container \"feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221\": container with ID starting with feebeca86680d6c71132b9620f0d47407a828a5655dec1bf6a29c250fd0c7221 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771240 4713 scope.go:117] "RemoveContainer" containerID="152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771459 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6"} err="failed to get container status \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": rpc error: code = NotFound desc = could not find container \"152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6\": container with ID starting with 152f1370a43782a1152c320baf9d965857404e98790d777ce9d5d3b5231f1ae6 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771482 4713 scope.go:117] "RemoveContainer" containerID="6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771695 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd"} err="failed to get container status 
\"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": rpc error: code = NotFound desc = could not find container \"6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd\": container with ID starting with 6d8fe0ed534ea93b5af7d6e45149927703ac95598105b4163936c89393204bbd not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771716 4713 scope.go:117] "RemoveContainer" containerID="bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771886 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff"} err="failed to get container status \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": rpc error: code = NotFound desc = could not find container \"bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff\": container with ID starting with bb0eaa5f769c0bc3e98d22c493088243cf44c102f82ae0d568cdaae63ab24aff not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.771907 4713 scope.go:117] "RemoveContainer" containerID="543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.772111 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0"} err="failed to get container status \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": rpc error: code = NotFound desc = could not find container \"543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0\": container with ID starting with 543d0ee6816c16a5698f907410b1bbb5506a0f1f6b7593e93ffaaf900a0c3ce0 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.772133 4713 scope.go:117] "RemoveContainer" containerID="924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.772440 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d"} err="failed to get container status \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": rpc error: code = NotFound desc = could not find container \"924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d\": container with ID starting with 924fee629d47f9338c2a9fab2c736a42065631285cec0dd5e5942a5d1b394a6d not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.772463 4713 scope.go:117] "RemoveContainer" containerID="ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.772670 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10"} err="failed to get container status \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": rpc error: code = NotFound desc = could not find container \"ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10\": container with ID starting with ffc344f2192b93cab7686ab828b652dc91d70eb07d9303f62a0e820ac214fb10 not found: ID does not exist" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.791367 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-2drw2"] Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.799494 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2drw2"] Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.812295 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba2d551-0768-4bac-9af5-bd6e7e58ce8c" path="/var/lib/kubelet/pods/4ba2d551-0768-4bac-9af5-bd6e7e58ce8c/volumes" Jan 26 15:46:17 crc kubenswrapper[4713]: I0126 15:46:17.838250 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.453428 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/2.log" Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.454004 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/1.log" Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.454056 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4ld7b" event={"ID":"d21f731c-7a63-4c3c-bdc5-9267197741d4","Type":"ContainerStarted","Data":"a280607730174754539713cb4f7aa4823f0bab675980163e7f0a9a93dbb4def2"} Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.457329 4713 generic.go:334] "Generic (PLEG): container finished" podID="41971b85-d2d0-41bf-b45d-5d923bced496" containerID="743d27eddf396b36bb16b2f45173b98a398c46b8f7986db3bf032d83407847b8" exitCode=0 Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.457360 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerDied","Data":"743d27eddf396b36bb16b2f45173b98a398c46b8f7986db3bf032d83407847b8"} Jan 26 15:46:18 crc kubenswrapper[4713]: I0126 15:46:18.457386 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"8a05fa76a7df4507e316e39445e969d3c531edde0f6a415a486223c512012202"} Jan 26 15:46:19 crc kubenswrapper[4713]: I0126 15:46:19.495000 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"98c98aee83ce706c87d9eb99785a060179a2caf647652d7c4b7d5a8c1df52f67"} Jan 26 15:46:19 crc kubenswrapper[4713]: I0126 15:46:19.495223 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"599051c9698371301378dd86253960c3eeb201fb74574f92dbdd187ef1b7e790"} Jan 26 15:46:19 crc kubenswrapper[4713]: I0126 15:46:19.495236 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"416fe1b84d011008c45c60458c03060cb82ab3fc282e032e0221a15316f788ff"} Jan 26 15:46:20 crc kubenswrapper[4713]: I0126 15:46:20.109315 4713 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 15:46:20 crc kubenswrapper[4713]: I0126 15:46:20.505873 4713 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"6aa317aedf24157e8e8fa58caa4d34e2d3b6691ae1338d4b3cdcbd7518b78622"} Jan 26 15:46:20 crc kubenswrapper[4713]: I0126 15:46:20.505942 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"e3a427f8d3c47d83ddf8a1c684ecf189945e2212ba8acde6ce1e40d80917ec3f"} Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.007302 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp"] Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.008243 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.010575 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.012212 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.012529 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fnn8b" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.070008 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds"] Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.070658 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.072159 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-j24tp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.073063 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.093966 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4"] Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.094615 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.133027 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.133077 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.133108 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.133149 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jn6\" (UniqueName: \"kubernetes.io/projected/913497e5-68bd-48dd-aed5-babd17f47f0e-kube-api-access-k7jn6\") pod \"obo-prometheus-operator-68bc856cb9-rmjvp\" (UID: \"913497e5-68bd-48dd-aed5-babd17f47f0e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.133243 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.234893 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.235173 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.235195 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.235221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.235256 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jn6\" (UniqueName: \"kubernetes.io/projected/913497e5-68bd-48dd-aed5-babd17f47f0e-kube-api-access-k7jn6\") pod \"obo-prometheus-operator-68bc856cb9-rmjvp\" (UID: \"913497e5-68bd-48dd-aed5-babd17f47f0e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.235788 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l79jc"] Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.236520 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.239318 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-b4c49" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.239617 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.241961 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.241958 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.253081 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jn6\" (UniqueName: \"kubernetes.io/projected/913497e5-68bd-48dd-aed5-babd17f47f0e-kube-api-access-k7jn6\") pod \"obo-prometheus-operator-68bc856cb9-rmjvp\" (UID: \"913497e5-68bd-48dd-aed5-babd17f47f0e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.256964 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72150c7a-70d1-4f39-9649-840dbf9571d2-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4\" (UID: \"72150c7a-70d1-4f39-9649-840dbf9571d2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.257844 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58c7e269-8e8b-4ee4-a57e-ab4218256bbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds\" (UID: \"58c7e269-8e8b-4ee4-a57e-ab4218256bbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.326572 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.336810 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn474\" (UniqueName: \"kubernetes.io/projected/9b4ece96-60c6-4974-af3e-6a61eebaf729-kube-api-access-jn474\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: \"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.336905 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b4ece96-60c6-4974-af3e-6a61eebaf729-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: \"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.343221 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-77g4l"] Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.344064 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.347538 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-pcfb8" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.356510 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(8af92a0c320698f993ad480fb906d7470dad4b43cf90c37cae570e61fb15638a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.356584 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(8af92a0c320698f993ad480fb906d7470dad4b43cf90c37cae570e61fb15638a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.356627 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(8af92a0c320698f993ad480fb906d7470dad4b43cf90c37cae570e61fb15638a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.356684 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators(913497e5-68bd-48dd-aed5-babd17f47f0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators(913497e5-68bd-48dd-aed5-babd17f47f0e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(8af92a0c320698f993ad480fb906d7470dad4b43cf90c37cae570e61fb15638a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" podUID="913497e5-68bd-48dd-aed5-babd17f47f0e" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.383227 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.404190 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(2aaa4f9f2f39b97ed7f0c8d5090e23fd3621fc8726cbee2216582a8e8d4f1a09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.404287 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(2aaa4f9f2f39b97ed7f0c8d5090e23fd3621fc8726cbee2216582a8e8d4f1a09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.404309 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(2aaa4f9f2f39b97ed7f0c8d5090e23fd3621fc8726cbee2216582a8e8d4f1a09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.404359 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators(58c7e269-8e8b-4ee4-a57e-ab4218256bbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators(58c7e269-8e8b-4ee4-a57e-ab4218256bbb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(2aaa4f9f2f39b97ed7f0c8d5090e23fd3621fc8726cbee2216582a8e8d4f1a09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" podUID="58c7e269-8e8b-4ee4-a57e-ab4218256bbb" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.407752 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.430911 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(960cfa6026f8068a46933ba23d5e2cfedc740407fea438c78551b846cae86a6a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.430982 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(960cfa6026f8068a46933ba23d5e2cfedc740407fea438c78551b846cae86a6a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.431006 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(960cfa6026f8068a46933ba23d5e2cfedc740407fea438c78551b846cae86a6a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.431063 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators(72150c7a-70d1-4f39-9649-840dbf9571d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators(72150c7a-70d1-4f39-9649-840dbf9571d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(960cfa6026f8068a46933ba23d5e2cfedc740407fea438c78551b846cae86a6a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" podUID="72150c7a-70d1-4f39-9649-840dbf9571d2" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.438281 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b4ece96-60c6-4974-af3e-6a61eebaf729-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: \"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.438365 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn474\" (UniqueName: \"kubernetes.io/projected/9b4ece96-60c6-4974-af3e-6a61eebaf729-kube-api-access-jn474\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: \"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.438403 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm95g\" (UniqueName: \"kubernetes.io/projected/ebe35fcf-702c-42da-8eba-33bb585c50db-kube-api-access-pm95g\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.438428 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebe35fcf-702c-42da-8eba-33bb585c50db-openshift-service-ca\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.442054 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b4ece96-60c6-4974-af3e-6a61eebaf729-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: \"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.474890 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn474\" (UniqueName: \"kubernetes.io/projected/9b4ece96-60c6-4974-af3e-6a61eebaf729-kube-api-access-jn474\") pod \"observability-operator-59bdc8b94-l79jc\" (UID: 
\"9b4ece96-60c6-4974-af3e-6a61eebaf729\") " pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.519254 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"8740cde855938ac0043fc2dad057d6e424fe58ccdc684547ffd1176b2e14c437"} Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.540073 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebe35fcf-702c-42da-8eba-33bb585c50db-openshift-service-ca\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.540264 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm95g\" (UniqueName: \"kubernetes.io/projected/ebe35fcf-702c-42da-8eba-33bb585c50db-kube-api-access-pm95g\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.541004 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebe35fcf-702c-42da-8eba-33bb585c50db-openshift-service-ca\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.566941 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm95g\" (UniqueName: \"kubernetes.io/projected/ebe35fcf-702c-42da-8eba-33bb585c50db-kube-api-access-pm95g\") pod \"perses-operator-5bf474d74f-77g4l\" (UID: \"ebe35fcf-702c-42da-8eba-33bb585c50db\") " pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.583664 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.604085 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(b89a3992d496d96605dbf9e217371651709efd0bb85d1763e972149bd5840b53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.604160 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(b89a3992d496d96605dbf9e217371651709efd0bb85d1763e972149bd5840b53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.604186 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(b89a3992d496d96605dbf9e217371651709efd0bb85d1763e972149bd5840b53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.604235 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l79jc_openshift-operators(9b4ece96-60c6-4974-af3e-6a61eebaf729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l79jc_openshift-operators(9b4ece96-60c6-4974-af3e-6a61eebaf729)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(b89a3992d496d96605dbf9e217371651709efd0bb85d1763e972149bd5840b53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" podUID="9b4ece96-60c6-4974-af3e-6a61eebaf729" Jan 26 15:46:21 crc kubenswrapper[4713]: I0126 15:46:21.684744 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.703151 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(08d8b622164f76e8b8fe1e5d81b2da86f64523d5ec15f4acd80fb4d8b5032089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.703254 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(08d8b622164f76e8b8fe1e5d81b2da86f64523d5ec15f4acd80fb4d8b5032089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.703285 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(08d8b622164f76e8b8fe1e5d81b2da86f64523d5ec15f4acd80fb4d8b5032089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:21 crc kubenswrapper[4713]: E0126 15:46:21.703352 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-77g4l_openshift-operators(ebe35fcf-702c-42da-8eba-33bb585c50db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-77g4l_openshift-operators(ebe35fcf-702c-42da-8eba-33bb585c50db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(08d8b622164f76e8b8fe1e5d81b2da86f64523d5ec15f4acd80fb4d8b5032089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" podUID="ebe35fcf-702c-42da-8eba-33bb585c50db" Jan 26 15:46:24 crc kubenswrapper[4713]: I0126 15:46:24.554741 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"da4592c2391ac8d8d8d6889ac2f77a42727654615f4e86cffe6417b366fa9521"} Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.575803 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" event={"ID":"41971b85-d2d0-41bf-b45d-5d923bced496","Type":"ContainerStarted","Data":"a6ecafd6ef28003f10b999ae69acb3db78ee30c977d8360b8727a2a5b57437e3"} Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.576480 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.576498 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.576513 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.618408 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.621515 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:27 crc kubenswrapper[4713]: I0126 15:46:27.631066 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" podStartSLOduration=10.631027593 podStartE2EDuration="10.631027593s" podCreationTimestamp="2026-01-26 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:46:27.626057526 +0000 UTC m=+762.763074781" watchObservedRunningTime="2026-01-26 15:46:27.631027593 +0000 UTC m=+762.768044828" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.590611 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp"] Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.590727 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.591113 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.596787 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds"] Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.596908 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.597314 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.599971 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-77g4l"] Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.600092 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.600488 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.621169 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4"] Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.621273 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.621703 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.656026 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l79jc"] Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.656205 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:28 crc kubenswrapper[4713]: I0126 15:46:28.656682 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.662343 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(7b263c5e85278583e14a525e6636c507d9670e5d195149debed375e6520bfa31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.662448 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(7b263c5e85278583e14a525e6636c507d9670e5d195149debed375e6520bfa31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.662473 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(7b263c5e85278583e14a525e6636c507d9670e5d195149debed375e6520bfa31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.662522 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators(913497e5-68bd-48dd-aed5-babd17f47f0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators(913497e5-68bd-48dd-aed5-babd17f47f0e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmjvp_openshift-operators_913497e5-68bd-48dd-aed5-babd17f47f0e_0(7b263c5e85278583e14a525e6636c507d9670e5d195149debed375e6520bfa31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" podUID="913497e5-68bd-48dd-aed5-babd17f47f0e" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.687479 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(74ea381423e5d86540897d2acd2d3511a878e5e8ada5e884e198f076eba4b2f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.687540 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(74ea381423e5d86540897d2acd2d3511a878e5e8ada5e884e198f076eba4b2f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.687568 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(74ea381423e5d86540897d2acd2d3511a878e5e8ada5e884e198f076eba4b2f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.687617 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators(58c7e269-8e8b-4ee4-a57e-ab4218256bbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators(58c7e269-8e8b-4ee4-a57e-ab4218256bbb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_openshift-operators_58c7e269-8e8b-4ee4-a57e-ab4218256bbb_0(74ea381423e5d86540897d2acd2d3511a878e5e8ada5e884e198f076eba4b2f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" podUID="58c7e269-8e8b-4ee4-a57e-ab4218256bbb" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.695184 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(a91988deca5986f1d3554830de86f0a18453e6577a755cacfa3edf6f423c4076): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.695251 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(a91988deca5986f1d3554830de86f0a18453e6577a755cacfa3edf6f423c4076): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.695272 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(a91988deca5986f1d3554830de86f0a18453e6577a755cacfa3edf6f423c4076): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.695312 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-77g4l_openshift-operators(ebe35fcf-702c-42da-8eba-33bb585c50db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-77g4l_openshift-operators(ebe35fcf-702c-42da-8eba-33bb585c50db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-77g4l_openshift-operators_ebe35fcf-702c-42da-8eba-33bb585c50db_0(a91988deca5986f1d3554830de86f0a18453e6577a755cacfa3edf6f423c4076): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" podUID="ebe35fcf-702c-42da-8eba-33bb585c50db" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.717232 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(55c267ece15639f1599e15b9ac0e58a78b44cb9f95b777bed8e0afa242d43596): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.717322 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(55c267ece15639f1599e15b9ac0e58a78b44cb9f95b777bed8e0afa242d43596): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.717351 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(55c267ece15639f1599e15b9ac0e58a78b44cb9f95b777bed8e0afa242d43596): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.717425 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators(72150c7a-70d1-4f39-9649-840dbf9571d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators(72150c7a-70d1-4f39-9649-840dbf9571d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_openshift-operators_72150c7a-70d1-4f39-9649-840dbf9571d2_0(55c267ece15639f1599e15b9ac0e58a78b44cb9f95b777bed8e0afa242d43596): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" podUID="72150c7a-70d1-4f39-9649-840dbf9571d2" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.734695 4713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(c4c329cf32f84ae7f8b772e97925e68793f61925145be1c7458b9d661e2851bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.734754 4713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(c4c329cf32f84ae7f8b772e97925e68793f61925145be1c7458b9d661e2851bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.734773 4713 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(c4c329cf32f84ae7f8b772e97925e68793f61925145be1c7458b9d661e2851bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:28 crc kubenswrapper[4713]: E0126 15:46:28.734807 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l79jc_openshift-operators(9b4ece96-60c6-4974-af3e-6a61eebaf729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l79jc_openshift-operators(9b4ece96-60c6-4974-af3e-6a61eebaf729)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l79jc_openshift-operators_9b4ece96-60c6-4974-af3e-6a61eebaf729_0(c4c329cf32f84ae7f8b772e97925e68793f61925145be1c7458b9d661e2851bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" podUID="9b4ece96-60c6-4974-af3e-6a61eebaf729" Jan 26 15:46:33 crc kubenswrapper[4713]: I0126 15:46:33.301470 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:46:33 crc kubenswrapper[4713]: I0126 15:46:33.301923 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:46:40 crc kubenswrapper[4713]: I0126 15:46:40.803494 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:40 crc kubenswrapper[4713]: I0126 15:46:40.805854 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:41 crc kubenswrapper[4713]: I0126 15:46:41.235042 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-77g4l"] Jan 26 15:46:41 crc kubenswrapper[4713]: W0126 15:46:41.241551 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe35fcf_702c_42da_8eba_33bb585c50db.slice/crio-b400746e4fd72103249a373bf440528c4c3501709644c252a35b2025c519bbeb WatchSource:0}: Error finding container b400746e4fd72103249a373bf440528c4c3501709644c252a35b2025c519bbeb: Status 404 returned error can't find the container with id b400746e4fd72103249a373bf440528c4c3501709644c252a35b2025c519bbeb Jan 26 15:46:41 crc kubenswrapper[4713]: I0126 15:46:41.677712 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" event={"ID":"ebe35fcf-702c-42da-8eba-33bb585c50db","Type":"ContainerStarted","Data":"b400746e4fd72103249a373bf440528c4c3501709644c252a35b2025c519bbeb"} Jan 26 15:46:41 crc kubenswrapper[4713]: I0126 15:46:41.804666 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:41 crc kubenswrapper[4713]: I0126 15:46:41.805602 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:42 crc kubenswrapper[4713]: I0126 15:46:42.257986 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l79jc"] Jan 26 15:46:42 crc kubenswrapper[4713]: W0126 15:46:42.267593 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b4ece96_60c6_4974_af3e_6a61eebaf729.slice/crio-90a8f97e6caf46c70676d2f71e3db92b29b35cd9b8616502f18a948da6a749d8 WatchSource:0}: Error finding container 90a8f97e6caf46c70676d2f71e3db92b29b35cd9b8616502f18a948da6a749d8: Status 404 returned error can't find the container with id 90a8f97e6caf46c70676d2f71e3db92b29b35cd9b8616502f18a948da6a749d8 Jan 26 15:46:42 crc kubenswrapper[4713]: I0126 15:46:42.708784 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" event={"ID":"9b4ece96-60c6-4974-af3e-6a61eebaf729","Type":"ContainerStarted","Data":"90a8f97e6caf46c70676d2f71e3db92b29b35cd9b8616502f18a948da6a749d8"} Jan 26 15:46:42 crc kubenswrapper[4713]: I0126 15:46:42.803123 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:42 crc kubenswrapper[4713]: I0126 15:46:42.803648 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.016114 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp"] Jan 26 15:46:43 crc kubenswrapper[4713]: W0126 15:46:43.023280 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod913497e5_68bd_48dd_aed5_babd17f47f0e.slice/crio-70e291d0f793019c548a2d8ccef9903d5d74d7e6caccffd0eab5845967a34a7a WatchSource:0}: Error finding container 70e291d0f793019c548a2d8ccef9903d5d74d7e6caccffd0eab5845967a34a7a: Status 404 returned error can't find the container with id 70e291d0f793019c548a2d8ccef9903d5d74d7e6caccffd0eab5845967a34a7a Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.716715 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" event={"ID":"913497e5-68bd-48dd-aed5-babd17f47f0e","Type":"ContainerStarted","Data":"70e291d0f793019c548a2d8ccef9903d5d74d7e6caccffd0eab5845967a34a7a"} Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.805668 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.805812 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.806139 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" Jan 26 15:46:43 crc kubenswrapper[4713]: I0126 15:46:43.806399 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" Jan 26 15:46:44 crc kubenswrapper[4713]: I0126 15:46:44.168807 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4"] Jan 26 15:46:44 crc kubenswrapper[4713]: I0126 15:46:44.390247 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds"] Jan 26 15:46:45 crc kubenswrapper[4713]: W0126 15:46:45.330505 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72150c7a_70d1_4f39_9649_840dbf9571d2.slice/crio-ca0d961932f7c9095107fa297495e32ae5fbd950847686c53502cd47c0312233 WatchSource:0}: Error finding container ca0d961932f7c9095107fa297495e32ae5fbd950847686c53502cd47c0312233: Status 404 returned error can't find the container with id ca0d961932f7c9095107fa297495e32ae5fbd950847686c53502cd47c0312233 Jan 26 15:46:45 crc kubenswrapper[4713]: I0126 15:46:45.732050 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" event={"ID":"58c7e269-8e8b-4ee4-a57e-ab4218256bbb","Type":"ContainerStarted","Data":"e42641b8cc94df06bfbefbe41393311fdc6df8d8cfa372df47046505f776375b"} Jan 26 15:46:45 crc kubenswrapper[4713]: I0126 15:46:45.733646 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" event={"ID":"72150c7a-70d1-4f39-9649-840dbf9571d2","Type":"ContainerStarted","Data":"ca0d961932f7c9095107fa297495e32ae5fbd950847686c53502cd47c0312233"} Jan 26 15:46:46 crc kubenswrapper[4713]: I0126 15:46:46.112725 4713 scope.go:117] "RemoveContainer" containerID="81fef6986044de1cc82fda7f41ffadb687ecdbc3047ddd68f2d4f21ee6698e77" Jan 26 15:46:47 crc kubenswrapper[4713]: I0126 15:46:47.888070 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xcdkj" Jan 26 15:46:50 crc kubenswrapper[4713]: I0126 15:46:50.764198 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4ld7b_d21f731c-7a63-4c3c-bdc5-9267197741d4/kube-multus/2.log" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.780671 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" event={"ID":"913497e5-68bd-48dd-aed5-babd17f47f0e","Type":"ContainerStarted","Data":"5075618fe1f625a6e190548fa8f025e5b3813d29103a4de0ad88f80ec2929a3e"} Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.783579 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" event={"ID":"9b4ece96-60c6-4974-af3e-6a61eebaf729","Type":"ContainerStarted","Data":"6e5f68ec8c7c71e814ca8c797fb558ea99f2d604641bc69f189ebaf509d08451"} Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.784467 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.784905 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" 
event={"ID":"58c7e269-8e8b-4ee4-a57e-ab4218256bbb","Type":"ContainerStarted","Data":"a407562e913c30266e722f11aab2094d4ff1f6472b703ed18decd165195f0715"} Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.787120 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" event={"ID":"ebe35fcf-702c-42da-8eba-33bb585c50db","Type":"ContainerStarted","Data":"2fc66f333a9ff5c3e28d25f0dcda8ebcf9ae9909f739d976459cee6abb72d24f"} Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.787233 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.788976 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.789591 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" event={"ID":"72150c7a-70d1-4f39-9649-840dbf9571d2","Type":"ContainerStarted","Data":"56a941730faee578e30207700de082c9b22d9549bfc0a12244dfd0c52dbc364e"} Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.803974 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmjvp" podStartSLOduration=23.657588919 podStartE2EDuration="32.803946007s" podCreationTimestamp="2026-01-26 15:46:20 +0000 UTC" firstStartedPulling="2026-01-26 15:46:43.02644648 +0000 UTC m=+778.163463715" lastFinishedPulling="2026-01-26 15:46:52.172803568 +0000 UTC m=+787.309820803" observedRunningTime="2026-01-26 15:46:52.801077873 +0000 UTC m=+787.938095168" watchObservedRunningTime="2026-01-26 15:46:52.803946007 +0000 UTC m=+787.940963272" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.829997 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4" podStartSLOduration=24.981967179 podStartE2EDuration="31.829979662s" podCreationTimestamp="2026-01-26 15:46:21 +0000 UTC" firstStartedPulling="2026-01-26 15:46:45.33412999 +0000 UTC m=+780.471147335" lastFinishedPulling="2026-01-26 15:46:52.182142593 +0000 UTC m=+787.319159818" observedRunningTime="2026-01-26 15:46:52.827759286 +0000 UTC m=+787.964776531" watchObservedRunningTime="2026-01-26 15:46:52.829979662 +0000 UTC m=+787.966996897" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.879595 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-l79jc" podStartSLOduration=22.886723506 podStartE2EDuration="31.8795737s" podCreationTimestamp="2026-01-26 15:46:21 +0000 UTC" firstStartedPulling="2026-01-26 15:46:42.269847231 +0000 UTC m=+777.406864506" lastFinishedPulling="2026-01-26 15:46:51.262697455 +0000 UTC m=+786.399714700" observedRunningTime="2026-01-26 15:46:52.859926492 +0000 UTC m=+787.996943727" watchObservedRunningTime="2026-01-26 15:46:52.8795737 +0000 UTC m=+788.016590935" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.881838 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" podStartSLOduration=21.871233315 podStartE2EDuration="31.881829756s" podCreationTimestamp="2026-01-26 15:46:21 +0000 UTC" firstStartedPulling="2026-01-26 15:46:41.244794109 +0000 UTC 
m=+776.381811344" lastFinishedPulling="2026-01-26 15:46:51.25539055 +0000 UTC m=+786.392407785" observedRunningTime="2026-01-26 15:46:52.880282471 +0000 UTC m=+788.017299716" watchObservedRunningTime="2026-01-26 15:46:52.881829756 +0000 UTC m=+788.018846991" Jan 26 15:46:52 crc kubenswrapper[4713]: I0126 15:46:52.908206 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds" podStartSLOduration=25.066260828 podStartE2EDuration="31.908187682s" podCreationTimestamp="2026-01-26 15:46:21 +0000 UTC" firstStartedPulling="2026-01-26 15:46:45.331042079 +0000 UTC m=+780.468059314" lastFinishedPulling="2026-01-26 15:46:52.172968923 +0000 UTC m=+787.309986168" observedRunningTime="2026-01-26 15:46:52.904268716 +0000 UTC m=+788.041285951" watchObservedRunningTime="2026-01-26 15:46:52.908187682 +0000 UTC m=+788.045204917" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.929848 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-72ppc"] Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.930994 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-72ppc" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.935261 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.935562 4713 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qq22r" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.935728 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.938582 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6"] Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.939265 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.943583 4713 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d5mnm" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.953116 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-72ppc"] Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.963523 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6"] Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.967156 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xdxtz"] Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.967998 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.975294 4713 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ptmhw" Jan 26 15:46:59 crc kubenswrapper[4713]: I0126 15:46:59.996030 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xdxtz"] Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.025259 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl26l\" (UniqueName: \"kubernetes.io/projected/e4913aa4-c0fe-4d3d-a5c3-64efb5c40291-kube-api-access-xl26l\") pod \"cert-manager-858654f9db-72ppc\" (UID: \"e4913aa4-c0fe-4d3d-a5c3-64efb5c40291\") " pod="cert-manager/cert-manager-858654f9db-72ppc" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.126184 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsft7\" (UniqueName: \"kubernetes.io/projected/e155e55f-092c-426f-9667-fa1bf707ee5b-kube-api-access-nsft7\") pod \"cert-manager-cainjector-cf98fcc89-l5zh6\" (UID: \"e155e55f-092c-426f-9667-fa1bf707ee5b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.126249 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl26l\" (UniqueName: \"kubernetes.io/projected/e4913aa4-c0fe-4d3d-a5c3-64efb5c40291-kube-api-access-xl26l\") pod \"cert-manager-858654f9db-72ppc\" (UID: \"e4913aa4-c0fe-4d3d-a5c3-64efb5c40291\") " pod="cert-manager/cert-manager-858654f9db-72ppc" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.126323 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjmm\" (UniqueName: \"kubernetes.io/projected/27379ef9-6846-4d31-a33b-f1c6baaac6b3-kube-api-access-fsjmm\") pod \"cert-manager-webhook-687f57d79b-xdxtz\" (UID: \"27379ef9-6846-4d31-a33b-f1c6baaac6b3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.145749 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl26l\" (UniqueName: \"kubernetes.io/projected/e4913aa4-c0fe-4d3d-a5c3-64efb5c40291-kube-api-access-xl26l\") pod \"cert-manager-858654f9db-72ppc\" (UID: \"e4913aa4-c0fe-4d3d-a5c3-64efb5c40291\") " pod="cert-manager/cert-manager-858654f9db-72ppc" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.227017 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsft7\" (UniqueName: \"kubernetes.io/projected/e155e55f-092c-426f-9667-fa1bf707ee5b-kube-api-access-nsft7\") pod \"cert-manager-cainjector-cf98fcc89-l5zh6\" (UID: \"e155e55f-092c-426f-9667-fa1bf707ee5b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.227148 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjmm\" (UniqueName: \"kubernetes.io/projected/27379ef9-6846-4d31-a33b-f1c6baaac6b3-kube-api-access-fsjmm\") pod \"cert-manager-webhook-687f57d79b-xdxtz\" (UID: \"27379ef9-6846-4d31-a33b-f1c6baaac6b3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.252588 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fsjmm\" (UniqueName: \"kubernetes.io/projected/27379ef9-6846-4d31-a33b-f1c6baaac6b3-kube-api-access-fsjmm\") pod \"cert-manager-webhook-687f57d79b-xdxtz\" (UID: \"27379ef9-6846-4d31-a33b-f1c6baaac6b3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.254033 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsft7\" (UniqueName: \"kubernetes.io/projected/e155e55f-092c-426f-9667-fa1bf707ee5b-kube-api-access-nsft7\") pod \"cert-manager-cainjector-cf98fcc89-l5zh6\" (UID: \"e155e55f-092c-426f-9667-fa1bf707ee5b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.264498 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-72ppc" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.284529 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.294187 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.726511 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-72ppc"] Jan 26 15:47:00 crc kubenswrapper[4713]: W0126 15:47:00.737170 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4913aa4_c0fe_4d3d_a5c3_64efb5c40291.slice/crio-8f872bc4a61453437879b25edbc6e77e57ed0e380b5aa0a7f987235cc818195d WatchSource:0}: Error finding container 8f872bc4a61453437879b25edbc6e77e57ed0e380b5aa0a7f987235cc818195d: Status 404 returned error can't find the container with id 8f872bc4a61453437879b25edbc6e77e57ed0e380b5aa0a7f987235cc818195d Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.823305 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6"] Jan 26 15:47:00 crc kubenswrapper[4713]: W0126 15:47:00.824970 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode155e55f_092c_426f_9667_fa1bf707ee5b.slice/crio-be5cc4f0c68db846550f668db5923ba62c9375bb29ef6b7b68216c08e8a067ef WatchSource:0}: Error finding container be5cc4f0c68db846550f668db5923ba62c9375bb29ef6b7b68216c08e8a067ef: Status 404 returned error can't find the container with id be5cc4f0c68db846550f668db5923ba62c9375bb29ef6b7b68216c08e8a067ef Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.829867 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xdxtz"] Jan 26 15:47:00 crc kubenswrapper[4713]: W0126 15:47:00.830224 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27379ef9_6846_4d31_a33b_f1c6baaac6b3.slice/crio-d3f31dd3bfbc122390b5bd92c9ece221b5930239cb8e6d2531d797a6e25e1d21 WatchSource:0}: Error finding container d3f31dd3bfbc122390b5bd92c9ece221b5930239cb8e6d2531d797a6e25e1d21: Status 404 returned error can't find the container with id d3f31dd3bfbc122390b5bd92c9ece221b5930239cb8e6d2531d797a6e25e1d21 Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.850563 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" event={"ID":"e155e55f-092c-426f-9667-fa1bf707ee5b","Type":"ContainerStarted","Data":"be5cc4f0c68db846550f668db5923ba62c9375bb29ef6b7b68216c08e8a067ef"} Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.852137 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" event={"ID":"27379ef9-6846-4d31-a33b-f1c6baaac6b3","Type":"ContainerStarted","Data":"d3f31dd3bfbc122390b5bd92c9ece221b5930239cb8e6d2531d797a6e25e1d21"} Jan 26 15:47:00 crc kubenswrapper[4713]: I0126 15:47:00.853156 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-72ppc" event={"ID":"e4913aa4-c0fe-4d3d-a5c3-64efb5c40291","Type":"ContainerStarted","Data":"8f872bc4a61453437879b25edbc6e77e57ed0e380b5aa0a7f987235cc818195d"} Jan 26 15:47:01 crc kubenswrapper[4713]: I0126 15:47:01.688399 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-77g4l" Jan 26 15:47:03 crc kubenswrapper[4713]: I0126 15:47:03.301243 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:47:03 crc kubenswrapper[4713]: I0126 15:47:03.301675 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:47:07 crc kubenswrapper[4713]: I0126 15:47:07.907564 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-72ppc" event={"ID":"e4913aa4-c0fe-4d3d-a5c3-64efb5c40291","Type":"ContainerStarted","Data":"a57c9fbee4fcb4956be55c86012d1e74647fb510b4bc96d49991a6cef00b4d40"} Jan 26 15:47:07 crc kubenswrapper[4713]: I0126 15:47:07.909384 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" event={"ID":"e155e55f-092c-426f-9667-fa1bf707ee5b","Type":"ContainerStarted","Data":"ffdffde59a8095aa30b569fb70479cd9b2b79ed59f5386282486d2cc6362fc10"} Jan 26 15:47:08 crc kubenswrapper[4713]: I0126 15:47:08.916128 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" event={"ID":"27379ef9-6846-4d31-a33b-f1c6baaac6b3","Type":"ContainerStarted","Data":"ef27e9ee36789b9124dacb6d65bc2a22a5ccb812f814452651d6eebf522bd026"} Jan 26 15:47:08 crc kubenswrapper[4713]: I0126 15:47:08.916628 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:08 crc kubenswrapper[4713]: I0126 15:47:08.933493 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" podStartSLOduration=2.483375972 podStartE2EDuration="9.93347544s" podCreationTimestamp="2026-01-26 15:46:59 +0000 UTC" firstStartedPulling="2026-01-26 15:47:00.835860071 +0000 UTC m=+795.972877316" lastFinishedPulling="2026-01-26 15:47:08.285959549 +0000 UTC m=+803.422976784" observedRunningTime="2026-01-26 15:47:08.930515013 +0000 UTC m=+804.067532258" watchObservedRunningTime="2026-01-26 
15:47:08.93347544 +0000 UTC m=+804.070492675" Jan 26 15:47:08 crc kubenswrapper[4713]: I0126 15:47:08.949115 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-72ppc" podStartSLOduration=3.672756406 podStartE2EDuration="9.949092619s" podCreationTimestamp="2026-01-26 15:46:59 +0000 UTC" firstStartedPulling="2026-01-26 15:47:00.740537038 +0000 UTC m=+795.877554313" lastFinishedPulling="2026-01-26 15:47:07.016873291 +0000 UTC m=+802.153890526" observedRunningTime="2026-01-26 15:47:08.94742548 +0000 UTC m=+804.084442715" watchObservedRunningTime="2026-01-26 15:47:08.949092619 +0000 UTC m=+804.086109854" Jan 26 15:47:08 crc kubenswrapper[4713]: I0126 15:47:08.970782 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5zh6" podStartSLOduration=3.502983275 podStartE2EDuration="9.970757606s" podCreationTimestamp="2026-01-26 15:46:59 +0000 UTC" firstStartedPulling="2026-01-26 15:47:00.827622989 +0000 UTC m=+795.964640224" lastFinishedPulling="2026-01-26 15:47:07.29539731 +0000 UTC m=+802.432414555" observedRunningTime="2026-01-26 15:47:08.965478981 +0000 UTC m=+804.102496236" watchObservedRunningTime="2026-01-26 15:47:08.970757606 +0000 UTC m=+804.107774841" Jan 26 15:47:15 crc kubenswrapper[4713]: I0126 15:47:15.298095 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xdxtz" Jan 26 15:47:33 crc kubenswrapper[4713]: I0126 15:47:33.301733 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:47:33 crc kubenswrapper[4713]: I0126 15:47:33.302313 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:47:33 crc kubenswrapper[4713]: I0126 15:47:33.302355 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:47:33 crc kubenswrapper[4713]: I0126 15:47:33.302813 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:47:33 crc kubenswrapper[4713]: I0126 15:47:33.302862 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a" gracePeriod=600 Jan 26 15:47:34 crc kubenswrapper[4713]: I0126 15:47:34.074840 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a" exitCode=0 Jan 26 15:47:34 
crc kubenswrapper[4713]: I0126 15:47:34.074926 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a"} Jan 26 15:47:34 crc kubenswrapper[4713]: I0126 15:47:34.075201 4713 scope.go:117] "RemoveContainer" containerID="3e0fa4d07dcfba7f5a3ed7a1e97bd343e126e54befe0b6192998369cbeb3fa98" Jan 26 15:47:35 crc kubenswrapper[4713]: I0126 15:47:35.085205 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff"} Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.412827 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p"] Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.414642 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.416635 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.427182 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p"] Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.511626 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.511782 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.511901 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrdkd\" (UniqueName: \"kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.612959 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrdkd\" (UniqueName: \"kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " 
pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.613062 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.613108 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.613750 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.613831 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.644321 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrdkd\" (UniqueName: \"kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.730493 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:47:49 crc kubenswrapper[4713]: I0126 15:47:49.989250 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p"] Jan 26 15:47:49 crc kubenswrapper[4713]: W0126 15:47:49.994104 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ffd789b_98cd_4fd1_a531_95d329e68c9b.slice/crio-0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727 WatchSource:0}: Error finding container 0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727: Status 404 returned error can't find the container with id 0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727 Jan 26 15:47:50 crc kubenswrapper[4713]: I0126 15:47:50.167939 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerStarted","Data":"e493ecd7319c10e8209b01fdb1c04222b9fcf7da81ef330c12afa569fe39475b"} Jan 26 15:47:50 crc kubenswrapper[4713]: I0126 15:47:50.167976 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerStarted","Data":"0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727"} Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.176545 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerID="e493ecd7319c10e8209b01fdb1c04222b9fcf7da81ef330c12afa569fe39475b" exitCode=0 Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.176903 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerDied","Data":"e493ecd7319c10e8209b01fdb1c04222b9fcf7da81ef330c12afa569fe39475b"} Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.351288 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.352206 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.354306 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.354602 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.355038 4713 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-f2rrq" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.364383 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.436382 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.436445 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxws9\" (UniqueName: \"kubernetes.io/projected/cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6-kube-api-access-gxws9\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.537783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.537836 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxws9\" (UniqueName: \"kubernetes.io/projected/cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6-kube-api-access-gxws9\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.545639 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.545678 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94c7d8f34479bab2e3bc3a3800cc32a208fa36034404476e0c792045dd8153cc/globalmount\"" pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.566846 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxws9\" (UniqueName: \"kubernetes.io/projected/cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6-kube-api-access-gxws9\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.584317 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dde636d3-9fc5-4c92-98d6-c782240ff209\") pod \"minio\" (UID: \"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6\") " pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.667293 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.783520 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.785826 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.788900 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.843222 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.843268 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.843317 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zst5q\" (UniqueName: \"kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.929017 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.944204 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.944252 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.944285 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zst5q\" (UniqueName: \"kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.944794 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.944857 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:51 crc kubenswrapper[4713]: I0126 15:47:51.964966 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zst5q\" (UniqueName: \"kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q\") pod \"redhat-operators-csrsd\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:52 crc kubenswrapper[4713]: I0126 15:47:52.106007 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:47:52 crc kubenswrapper[4713]: I0126 15:47:52.190736 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6","Type":"ContainerStarted","Data":"17610f5ef3187fd2359ffe9107a30eaf97aade68c7d5996eb02473231e19c29a"} Jan 26 15:47:52 crc kubenswrapper[4713]: I0126 15:47:52.353337 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:47:52 crc kubenswrapper[4713]: W0126 15:47:52.361149 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2e4bf9a_6bb7_4127_9929_79af11215ab6.slice/crio-2c04f18b22ba3287260d953db74c23d1c33f8f2a294c31c85b263b8fe34276c7 WatchSource:0}: Error finding container 2c04f18b22ba3287260d953db74c23d1c33f8f2a294c31c85b263b8fe34276c7: Status 404 returned error can't find the container with id 2c04f18b22ba3287260d953db74c23d1c33f8f2a294c31c85b263b8fe34276c7 Jan 26 15:47:53 crc kubenswrapper[4713]: I0126 15:47:53.197378 4713 generic.go:334] "Generic (PLEG): container finished" podID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerID="9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b" exitCode=0 Jan 26 15:47:53 crc kubenswrapper[4713]: I0126 15:47:53.197416 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerDied","Data":"9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b"} Jan 26 15:47:53 crc kubenswrapper[4713]: I0126 15:47:53.197439 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerStarted","Data":"2c04f18b22ba3287260d953db74c23d1c33f8f2a294c31c85b263b8fe34276c7"} Jan 26 15:47:58 crc kubenswrapper[4713]: I0126 15:47:58.229229 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerStarted","Data":"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9"} Jan 26 15:47:58 crc kubenswrapper[4713]: I0126 15:47:58.232526 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerID="86fab080ff1b92385fba48702b5f53b66ce5dc1d0e32da01168a235d0b7f35cf" exitCode=0 Jan 26 15:47:58 crc kubenswrapper[4713]: I0126 15:47:58.232559 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerDied","Data":"86fab080ff1b92385fba48702b5f53b66ce5dc1d0e32da01168a235d0b7f35cf"} Jan 26 15:47:59 crc kubenswrapper[4713]: I0126 15:47:59.240802 4713 generic.go:334] "Generic (PLEG): container finished" podID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerID="6ebbb4e8fcb99a44f4985cb5c381f0f16c8fc5f1b1c03587cfaf2923d48b300e" exitCode=0 Jan 26 15:47:59 crc kubenswrapper[4713]: I0126 15:47:59.241248 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerDied","Data":"6ebbb4e8fcb99a44f4985cb5c381f0f16c8fc5f1b1c03587cfaf2923d48b300e"} Jan 26 15:47:59 crc 
kubenswrapper[4713]: I0126 15:47:59.244045 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cb6eceb4-72aa-49cf-8bc4-ac07f28ee6f6","Type":"ContainerStarted","Data":"6e22fded4712f5471fd05a2fe7d7b44ab04a1ff9aac09d178053166cbc1e8fc9"} Jan 26 15:47:59 crc kubenswrapper[4713]: I0126 15:47:59.283927 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.105224953 podStartE2EDuration="10.283903118s" podCreationTimestamp="2026-01-26 15:47:49 +0000 UTC" firstStartedPulling="2026-01-26 15:47:51.937103678 +0000 UTC m=+847.074120913" lastFinishedPulling="2026-01-26 15:47:58.115781843 +0000 UTC m=+853.252799078" observedRunningTime="2026-01-26 15:47:59.279753108 +0000 UTC m=+854.416770353" watchObservedRunningTime="2026-01-26 15:47:59.283903118 +0000 UTC m=+854.420920353" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.254567 4713 generic.go:334] "Generic (PLEG): container finished" podID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerID="2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9" exitCode=0 Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.254637 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerDied","Data":"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9"} Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.785579 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.885128 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle\") pod \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.885228 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrdkd\" (UniqueName: \"kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd\") pod \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.885321 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util\") pod \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\" (UID: \"4ffd789b-98cd-4fd1-a531-95d329e68c9b\") " Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.886287 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle" (OuterVolumeSpecName: "bundle") pod "4ffd789b-98cd-4fd1-a531-95d329e68c9b" (UID: "4ffd789b-98cd-4fd1-a531-95d329e68c9b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.891128 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd" (OuterVolumeSpecName: "kube-api-access-wrdkd") pod "4ffd789b-98cd-4fd1-a531-95d329e68c9b" (UID: "4ffd789b-98cd-4fd1-a531-95d329e68c9b"). InnerVolumeSpecName "kube-api-access-wrdkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.896851 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util" (OuterVolumeSpecName: "util") pod "4ffd789b-98cd-4fd1-a531-95d329e68c9b" (UID: "4ffd789b-98cd-4fd1-a531-95d329e68c9b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.986967 4713 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.987001 4713 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ffd789b-98cd-4fd1-a531-95d329e68c9b-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:00 crc kubenswrapper[4713]: I0126 15:48:00.987010 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrdkd\" (UniqueName: \"kubernetes.io/projected/4ffd789b-98cd-4fd1-a531-95d329e68c9b-kube-api-access-wrdkd\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:01 crc kubenswrapper[4713]: I0126 15:48:01.264670 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" event={"ID":"4ffd789b-98cd-4fd1-a531-95d329e68c9b","Type":"ContainerDied","Data":"0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727"} Jan 26 15:48:01 crc kubenswrapper[4713]: I0126 15:48:01.264712 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p" Jan 26 15:48:01 crc kubenswrapper[4713]: I0126 15:48:01.264719 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f20d36061fdf64e279e07cf393e8c1d931a63c1754ee0d7a150150faffc8727" Jan 26 15:48:01 crc kubenswrapper[4713]: I0126 15:48:01.267706 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerStarted","Data":"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297"} Jan 26 15:48:01 crc kubenswrapper[4713]: I0126 15:48:01.291059 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-csrsd" podStartSLOduration=3.484370626 podStartE2EDuration="10.2910389s" podCreationTimestamp="2026-01-26 15:47:51 +0000 UTC" firstStartedPulling="2026-01-26 15:47:53.894047829 +0000 UTC m=+849.031065064" lastFinishedPulling="2026-01-26 15:48:00.700716093 +0000 UTC m=+855.837733338" observedRunningTime="2026-01-26 15:48:01.287834847 +0000 UTC m=+856.424852132" watchObservedRunningTime="2026-01-26 15:48:01.2910389 +0000 UTC m=+856.428056135" Jan 26 15:48:02 crc kubenswrapper[4713]: I0126 15:48:02.106247 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:02 crc kubenswrapper[4713]: I0126 15:48:02.106323 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:03 crc kubenswrapper[4713]: I0126 15:48:03.146246 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-csrsd" 
podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="registry-server" probeResult="failure" output=< Jan 26 15:48:03 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 15:48:03 crc kubenswrapper[4713]: > Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.242898 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp"] Jan 26 15:48:09 crc kubenswrapper[4713]: E0126 15:48:09.243530 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="pull" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.243542 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="pull" Jan 26 15:48:09 crc kubenswrapper[4713]: E0126 15:48:09.243563 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="util" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.243569 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="util" Jan 26 15:48:09 crc kubenswrapper[4713]: E0126 15:48:09.243580 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="extract" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.243586 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="extract" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.243675 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffd789b-98cd-4fd1-a531-95d329e68c9b" containerName="extract" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.244262 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.246510 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.246922 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.247050 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-j4dvx" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.247051 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.247280 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.249441 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.297532 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp"] Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.390495 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-webhook-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.390991 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llj9l\" (UniqueName: \"kubernetes.io/projected/e2e9f61c-c80e-443b-9175-15f2dcfaba60-kube-api-access-llj9l\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.391092 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.391145 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-apiservice-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.391303 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" 
(UniqueName: \"kubernetes.io/configmap/e2e9f61c-c80e-443b-9175-15f2dcfaba60-manager-config\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.492479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e2e9f61c-c80e-443b-9175-15f2dcfaba60-manager-config\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.492759 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-webhook-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.492926 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llj9l\" (UniqueName: \"kubernetes.io/projected/e2e9f61c-c80e-443b-9175-15f2dcfaba60-kube-api-access-llj9l\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.493041 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.493145 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-apiservice-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.493483 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e2e9f61c-c80e-443b-9175-15f2dcfaba60-manager-config\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.500299 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-apiservice-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.503441 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.519275 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e9f61c-c80e-443b-9175-15f2dcfaba60-webhook-cert\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.529436 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llj9l\" (UniqueName: \"kubernetes.io/projected/e2e9f61c-c80e-443b-9175-15f2dcfaba60-kube-api-access-llj9l\") pod \"loki-operator-controller-manager-685487c794-sjbsp\" (UID: \"e2e9f61c-c80e-443b-9175-15f2dcfaba60\") " pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:09 crc kubenswrapper[4713]: I0126 15:48:09.559447 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:10 crc kubenswrapper[4713]: I0126 15:48:10.256448 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp"] Jan 26 15:48:10 crc kubenswrapper[4713]: I0126 15:48:10.331152 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" event={"ID":"e2e9f61c-c80e-443b-9175-15f2dcfaba60","Type":"ContainerStarted","Data":"1e67346694fb79520f2706a475d8cf98af0458ee0fe5a529fda40eb15fef7972"} Jan 26 15:48:12 crc kubenswrapper[4713]: I0126 15:48:12.192662 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:12 crc kubenswrapper[4713]: I0126 15:48:12.237119 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:12 crc kubenswrapper[4713]: I0126 15:48:12.807440 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.347820 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-csrsd" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="registry-server" containerID="cri-o://8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297" gracePeriod=2 Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.771695 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.881167 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zst5q\" (UniqueName: \"kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q\") pod \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.881291 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content\") pod \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.881520 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities\") pod \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\" (UID: \"c2e4bf9a-6bb7-4127-9929-79af11215ab6\") " Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.884311 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities" (OuterVolumeSpecName: "utilities") pod "c2e4bf9a-6bb7-4127-9929-79af11215ab6" (UID: "c2e4bf9a-6bb7-4127-9929-79af11215ab6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.890424 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q" (OuterVolumeSpecName: "kube-api-access-zst5q") pod "c2e4bf9a-6bb7-4127-9929-79af11215ab6" (UID: "c2e4bf9a-6bb7-4127-9929-79af11215ab6"). InnerVolumeSpecName "kube-api-access-zst5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.984154 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:13 crc kubenswrapper[4713]: I0126 15:48:13.984509 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zst5q\" (UniqueName: \"kubernetes.io/projected/c2e4bf9a-6bb7-4127-9929-79af11215ab6-kube-api-access-zst5q\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.001877 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2e4bf9a-6bb7-4127-9929-79af11215ab6" (UID: "c2e4bf9a-6bb7-4127-9929-79af11215ab6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.085562 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e4bf9a-6bb7-4127-9929-79af11215ab6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.357006 4713 generic.go:334] "Generic (PLEG): container finished" podID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerID="8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297" exitCode=0 Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.357046 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerDied","Data":"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297"} Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.357066 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csrsd" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.357117 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csrsd" event={"ID":"c2e4bf9a-6bb7-4127-9929-79af11215ab6","Type":"ContainerDied","Data":"2c04f18b22ba3287260d953db74c23d1c33f8f2a294c31c85b263b8fe34276c7"} Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.357138 4713 scope.go:117] "RemoveContainer" containerID="8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.386429 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.392798 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-csrsd"] Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.392970 4713 scope.go:117] "RemoveContainer" containerID="2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.414745 4713 scope.go:117] "RemoveContainer" containerID="9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.430192 4713 scope.go:117] "RemoveContainer" containerID="8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297" Jan 26 15:48:14 crc kubenswrapper[4713]: E0126 15:48:14.430712 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297\": container with ID starting with 8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297 not found: ID does not exist" containerID="8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.430754 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297"} err="failed to get container status \"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297\": rpc error: code = NotFound desc = could not find container \"8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297\": container with ID starting with 8c60595906710367366d9085a298323557104cd4dd4685194c184b79b3b06297 not found: ID does not exist" Jan 26 15:48:14 crc 
kubenswrapper[4713]: I0126 15:48:14.430779 4713 scope.go:117] "RemoveContainer" containerID="2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9" Jan 26 15:48:14 crc kubenswrapper[4713]: E0126 15:48:14.431204 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9\": container with ID starting with 2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9 not found: ID does not exist" containerID="2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.431242 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9"} err="failed to get container status \"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9\": rpc error: code = NotFound desc = could not find container \"2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9\": container with ID starting with 2e11a78bcfb03dc58df4a71469e60c7ff8f06ff2dff8fb25ebdea9b7ee323dc9 not found: ID does not exist" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.431270 4713 scope.go:117] "RemoveContainer" containerID="9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b" Jan 26 15:48:14 crc kubenswrapper[4713]: E0126 15:48:14.431722 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b\": container with ID starting with 9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b not found: ID does not exist" containerID="9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b" Jan 26 15:48:14 crc kubenswrapper[4713]: I0126 15:48:14.431753 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b"} err="failed to get container status \"9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b\": rpc error: code = NotFound desc = could not find container \"9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b\": container with ID starting with 9d8b425bc3d7e43105f8d3b71a9ba9e8e88d4caa54b48c202571b7d251bb3a8b not found: ID does not exist" Jan 26 15:48:15 crc kubenswrapper[4713]: I0126 15:48:15.813832 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" path="/var/lib/kubelet/pods/c2e4bf9a-6bb7-4127-9929-79af11215ab6/volumes" Jan 26 15:48:17 crc kubenswrapper[4713]: I0126 15:48:17.386567 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" event={"ID":"e2e9f61c-c80e-443b-9175-15f2dcfaba60","Type":"ContainerStarted","Data":"00150881e8cbfbf6a3964e1a4995f0075b311e1a332fbf37e2e418728933ea98"} Jan 26 15:48:23 crc kubenswrapper[4713]: I0126 15:48:23.435434 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" event={"ID":"e2e9f61c-c80e-443b-9175-15f2dcfaba60","Type":"ContainerStarted","Data":"ac287f2f8905c75f71ad1caa11a769a5a73fd540f462b9324a1a1999695b0d25"} Jan 26 15:48:23 crc kubenswrapper[4713]: I0126 15:48:23.436067 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:23 crc kubenswrapper[4713]: I0126 15:48:23.438131 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" Jan 26 15:48:23 crc kubenswrapper[4713]: I0126 15:48:23.464351 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-685487c794-sjbsp" podStartSLOduration=1.519545077 podStartE2EDuration="14.464327784s" podCreationTimestamp="2026-01-26 15:48:09 +0000 UTC" firstStartedPulling="2026-01-26 15:48:10.274446141 +0000 UTC m=+865.411463376" lastFinishedPulling="2026-01-26 15:48:23.219228848 +0000 UTC m=+878.356246083" observedRunningTime="2026-01-26 15:48:23.455899201 +0000 UTC m=+878.592916466" watchObservedRunningTime="2026-01-26 15:48:23.464327784 +0000 UTC m=+878.601345029" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.658787 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl"] Jan 26 15:49:10 crc kubenswrapper[4713]: E0126 15:49:10.659666 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="registry-server" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.659683 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="registry-server" Jan 26 15:49:10 crc kubenswrapper[4713]: E0126 15:49:10.659702 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="extract-content" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.659711 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="extract-content" Jan 26 15:49:10 crc kubenswrapper[4713]: E0126 15:49:10.659731 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="extract-utilities" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.659738 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="extract-utilities" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.659857 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e4bf9a-6bb7-4127-9929-79af11215ab6" containerName="registry-server" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.660849 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.663993 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.673187 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl"] Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.757745 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjqc\" (UniqueName: \"kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.757815 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.757866 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.859719 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mjqc\" (UniqueName: \"kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.859874 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.859913 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.861207 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.861591 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.881911 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mjqc\" (UniqueName: \"kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:10 crc kubenswrapper[4713]: I0126 15:49:10.978958 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:11 crc kubenswrapper[4713]: I0126 15:49:11.405517 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl"] Jan 26 15:49:11 crc kubenswrapper[4713]: I0126 15:49:11.717882 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerStarted","Data":"82087fb99a0b0e85f73d128665cd063e68102ebce19e9aa63a0950dae3ee90d2"} Jan 26 15:49:11 crc kubenswrapper[4713]: I0126 15:49:11.717923 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerStarted","Data":"c4a2388295ac7ef6f18893186ec0823177b485d67660996df88b831a355e37e9"} Jan 26 15:49:12 crc kubenswrapper[4713]: I0126 15:49:12.725887 4713 generic.go:334] "Generic (PLEG): container finished" podID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerID="82087fb99a0b0e85f73d128665cd063e68102ebce19e9aa63a0950dae3ee90d2" exitCode=0 Jan 26 15:49:12 crc kubenswrapper[4713]: I0126 15:49:12.725991 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerDied","Data":"82087fb99a0b0e85f73d128665cd063e68102ebce19e9aa63a0950dae3ee90d2"} Jan 26 15:49:14 crc kubenswrapper[4713]: I0126 15:49:14.743186 4713 generic.go:334] "Generic (PLEG): container finished" podID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerID="ab12343c32bd4aed8c4024f9d1917a915cd31ae5d8d5641911e4ae8703916873" exitCode=0 Jan 26 15:49:14 crc kubenswrapper[4713]: I0126 15:49:14.743243 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" 
event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerDied","Data":"ab12343c32bd4aed8c4024f9d1917a915cd31ae5d8d5641911e4ae8703916873"} Jan 26 15:49:15 crc kubenswrapper[4713]: I0126 15:49:15.754616 4713 generic.go:334] "Generic (PLEG): container finished" podID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerID="c014ca3279d5fb7eaf29c070a4483622b89394467b2ec94b3c6adc831d95a21b" exitCode=0 Jan 26 15:49:15 crc kubenswrapper[4713]: I0126 15:49:15.754662 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerDied","Data":"c014ca3279d5fb7eaf29c070a4483622b89394467b2ec94b3c6adc831d95a21b"} Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.073884 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.193743 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle\") pod \"9323c729-cd29-40a2-9ed3-49844ca9e66c\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.193820 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util\") pod \"9323c729-cd29-40a2-9ed3-49844ca9e66c\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.193853 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mjqc\" (UniqueName: \"kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc\") pod \"9323c729-cd29-40a2-9ed3-49844ca9e66c\" (UID: \"9323c729-cd29-40a2-9ed3-49844ca9e66c\") " Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.447597 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle" (OuterVolumeSpecName: "bundle") pod "9323c729-cd29-40a2-9ed3-49844ca9e66c" (UID: "9323c729-cd29-40a2-9ed3-49844ca9e66c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.457970 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util" (OuterVolumeSpecName: "util") pod "9323c729-cd29-40a2-9ed3-49844ca9e66c" (UID: "9323c729-cd29-40a2-9ed3-49844ca9e66c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.468235 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc" (OuterVolumeSpecName: "kube-api-access-2mjqc") pod "9323c729-cd29-40a2-9ed3-49844ca9e66c" (UID: "9323c729-cd29-40a2-9ed3-49844ca9e66c"). InnerVolumeSpecName "kube-api-access-2mjqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.497169 4713 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.497193 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mjqc\" (UniqueName: \"kubernetes.io/projected/9323c729-cd29-40a2-9ed3-49844ca9e66c-kube-api-access-2mjqc\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.497202 4713 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9323c729-cd29-40a2-9ed3-49844ca9e66c-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.771481 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" event={"ID":"9323c729-cd29-40a2-9ed3-49844ca9e66c","Type":"ContainerDied","Data":"c4a2388295ac7ef6f18893186ec0823177b485d67660996df88b831a355e37e9"} Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.771859 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a2388295ac7ef6f18893186ec0823177b485d67660996df88b831a355e37e9" Jan 26 15:49:17 crc kubenswrapper[4713]: I0126 15:49:17.771542 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.408694 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-s598k"] Jan 26 15:49:20 crc kubenswrapper[4713]: E0126 15:49:20.409184 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="pull" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.409194 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="pull" Jan 26 15:49:20 crc kubenswrapper[4713]: E0126 15:49:20.409212 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="util" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.409218 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="util" Jan 26 15:49:20 crc kubenswrapper[4713]: E0126 15:49:20.409230 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="extract" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.409237 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="extract" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.409352 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9323c729-cd29-40a2-9ed3-49844ca9e66c" containerName="extract" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.409873 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-s598k" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.415588 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.416064 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rpks5" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.416406 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.424396 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-s598k"] Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.534541 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r996x\" (UniqueName: \"kubernetes.io/projected/67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c-kube-api-access-r996x\") pod \"nmstate-operator-646758c888-s598k\" (UID: \"67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c\") " pod="openshift-nmstate/nmstate-operator-646758c888-s598k" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.635639 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r996x\" (UniqueName: \"kubernetes.io/projected/67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c-kube-api-access-r996x\") pod \"nmstate-operator-646758c888-s598k\" (UID: \"67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c\") " pod="openshift-nmstate/nmstate-operator-646758c888-s598k" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.658883 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r996x\" (UniqueName: \"kubernetes.io/projected/67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c-kube-api-access-r996x\") pod \"nmstate-operator-646758c888-s598k\" (UID: \"67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c\") " pod="openshift-nmstate/nmstate-operator-646758c888-s598k" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.725628 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-s598k" Jan 26 15:49:20 crc kubenswrapper[4713]: I0126 15:49:20.945112 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-s598k"] Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.798055 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-s598k" event={"ID":"67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c","Type":"ContainerStarted","Data":"f7927f44135f397e8704a049056dd74921ce7d692129efb57546deccb723bf9e"} Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.822272 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.824580 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.835871 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.953068 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6dw\" (UniqueName: \"kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.953316 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:21 crc kubenswrapper[4713]: I0126 15:49:21.953383 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.054602 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.054647 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.054696 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6dw\" (UniqueName: \"kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.055225 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.055236 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.075312 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gf6dw\" (UniqueName: \"kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw\") pod \"certified-operators-4s5rj\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.146616 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.473948 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.805690 4713 generic.go:334] "Generic (PLEG): container finished" podID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerID="43974df7094a853d98bda1d4c8556da8e34eb7c766c9e622055f62b234dbf87c" exitCode=0 Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.805742 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerDied","Data":"43974df7094a853d98bda1d4c8556da8e34eb7c766c9e622055f62b234dbf87c"} Jan 26 15:49:22 crc kubenswrapper[4713]: I0126 15:49:22.805771 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerStarted","Data":"867dfa1cd810a0de70c6296ebd5f006e9a5332e89e8625d45de0fd9b71510ed0"} Jan 26 15:49:24 crc kubenswrapper[4713]: I0126 15:49:24.819757 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-s598k" event={"ID":"67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c","Type":"ContainerStarted","Data":"8f3c5c08ab55b6cee7ee140696499941b66755a4e1d18b9741ac1bd34e1f73cd"} Jan 26 15:49:24 crc kubenswrapper[4713]: I0126 15:49:24.821149 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerStarted","Data":"c9948148579e9f9b968535a5293a1df0320b96a5544d74fdb6c739b8b2dd59fc"} Jan 26 15:49:24 crc kubenswrapper[4713]: I0126 15:49:24.897458 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-s598k" podStartSLOduration=1.613283772 podStartE2EDuration="4.897427348s" podCreationTimestamp="2026-01-26 15:49:20 +0000 UTC" firstStartedPulling="2026-01-26 15:49:20.95782104 +0000 UTC m=+936.094838275" lastFinishedPulling="2026-01-26 15:49:24.241964616 +0000 UTC m=+939.378981851" observedRunningTime="2026-01-26 15:49:24.897090188 +0000 UTC m=+940.034107423" watchObservedRunningTime="2026-01-26 15:49:24.897427348 +0000 UTC m=+940.034444583" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.118864 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-52jc4"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.120406 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.127420 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.128174 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.129385 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kgcwf" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.133845 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-52jc4"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.135133 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.145054 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.174050 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k8t4l"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.175028 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207213 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z2w\" (UniqueName: \"kubernetes.io/projected/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-kube-api-access-k9z2w\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207533 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpx8t\" (UniqueName: \"kubernetes.io/projected/8f508dd8-5689-4e70-b252-5a4e6204bd4b-kube-api-access-gpx8t\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207655 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-dbus-socket\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207758 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207876 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-ovs-socket\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.207990 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-nmstate-lock\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") 
" pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.208115 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrhd\" (UniqueName: \"kubernetes.io/projected/e4ccf912-a778-4d91-84d9-bbfb4f83c221-kube-api-access-lcrhd\") pod \"nmstate-metrics-54757c584b-52jc4\" (UID: \"e4ccf912-a778-4d91-84d9-bbfb4f83c221\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309151 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z2w\" (UniqueName: \"kubernetes.io/projected/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-kube-api-access-k9z2w\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309208 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpx8t\" (UniqueName: \"kubernetes.io/projected/8f508dd8-5689-4e70-b252-5a4e6204bd4b-kube-api-access-gpx8t\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309231 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-dbus-socket\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309263 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309297 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-ovs-socket\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-nmstate-lock\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.309379 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrhd\" (UniqueName: \"kubernetes.io/projected/e4ccf912-a778-4d91-84d9-bbfb4f83c221-kube-api-access-lcrhd\") pod \"nmstate-metrics-54757c584b-52jc4\" (UID: \"e4ccf912-a778-4d91-84d9-bbfb4f83c221\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.310345 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-dbus-socket\") pod \"nmstate-handler-k8t4l\" (UID: 
\"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: E0126 15:49:26.310453 4713 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 15:49:26 crc kubenswrapper[4713]: E0126 15:49:26.310511 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair podName:8f508dd8-5689-4e70-b252-5a4e6204bd4b nodeName:}" failed. No retries permitted until 2026-01-26 15:49:26.810490605 +0000 UTC m=+941.947507840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-t929j" (UID: "8f508dd8-5689-4e70-b252-5a4e6204bd4b") : secret "openshift-nmstate-webhook" not found Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.310697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-ovs-socket\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.310734 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-nmstate-lock\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.315590 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.319198 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.322566 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.322933 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2kf27" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.323123 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.332662 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.348033 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z2w\" (UniqueName: \"kubernetes.io/projected/ad3354c5-b1d1-4473-99f1-0b1a9a4ded20-kube-api-access-k9z2w\") pod \"nmstate-handler-k8t4l\" (UID: \"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20\") " pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.356404 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpx8t\" (UniqueName: \"kubernetes.io/projected/8f508dd8-5689-4e70-b252-5a4e6204bd4b-kube-api-access-gpx8t\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.365865 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrhd\" (UniqueName: \"kubernetes.io/projected/e4ccf912-a778-4d91-84d9-bbfb4f83c221-kube-api-access-lcrhd\") pod \"nmstate-metrics-54757c584b-52jc4\" (UID: \"e4ccf912-a778-4d91-84d9-bbfb4f83c221\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.409997 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.410074 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrg6t\" (UniqueName: \"kubernetes.io/projected/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-kube-api-access-jrg6t\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.410156 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.437951 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.489657 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.510803 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.510895 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrg6t\" (UniqueName: \"kubernetes.io/projected/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-kube-api-access-jrg6t\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.510969 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.512201 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.516303 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.528686 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c6c78d75f-ltwjm"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.529635 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.536620 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrg6t\" (UniqueName: \"kubernetes.io/projected/a6fe59c9-c3b5-407e-9d75-9e7f98d4142d-kube-api-access-jrg6t\") pod \"nmstate-console-plugin-7754f76f8b-7bzvm\" (UID: \"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.543734 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c6c78d75f-ltwjm"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612421 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-service-ca\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612481 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-trusted-ca-bundle\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612510 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612534 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf88g\" (UniqueName: \"kubernetes.io/projected/52a083d8-82ae-4594-afc1-130135009e2b-kube-api-access-jf88g\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612568 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-console-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612618 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-oauth-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.612640 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-oauth-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 
15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.641792 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713712 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-service-ca\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713749 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-trusted-ca-bundle\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713772 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713788 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf88g\" (UniqueName: \"kubernetes.io/projected/52a083d8-82ae-4594-afc1-130135009e2b-kube-api-access-jf88g\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713813 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-console-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713847 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-oauth-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.713861 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-oauth-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.722537 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-oauth-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.729196 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-service-ca\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.733290 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-console-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.733716 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a083d8-82ae-4594-afc1-130135009e2b-trusted-ca-bundle\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.736632 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-serving-cert\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.736848 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52a083d8-82ae-4594-afc1-130135009e2b-console-oauth-config\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.736976 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf88g\" (UniqueName: \"kubernetes.io/projected/52a083d8-82ae-4594-afc1-130135009e2b-kube-api-access-jf88g\") pod \"console-c6c78d75f-ltwjm\" (UID: \"52a083d8-82ae-4594-afc1-130135009e2b\") " pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.814940 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.818720 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8f508dd8-5689-4e70-b252-5a4e6204bd4b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t929j\" (UID: \"8f508dd8-5689-4e70-b252-5a4e6204bd4b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.832489 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-52jc4"] Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.840202 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k8t4l" event={"ID":"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20","Type":"ContainerStarted","Data":"d68662bbe5eba02580167b67953c2fbf803186982129adef8194e0642f1510ee"} Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.842256 4713 generic.go:334] "Generic (PLEG): container finished" 
podID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerID="c9948148579e9f9b968535a5293a1df0320b96a5544d74fdb6c739b8b2dd59fc" exitCode=0 Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.842294 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerDied","Data":"c9948148579e9f9b968535a5293a1df0320b96a5544d74fdb6c739b8b2dd59fc"} Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.861677 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:26 crc kubenswrapper[4713]: I0126 15:49:26.988957 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm"] Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.045003 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.162296 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c6c78d75f-ltwjm"] Jan 26 15:49:27 crc kubenswrapper[4713]: W0126 15:49:27.165446 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52a083d8_82ae_4594_afc1_130135009e2b.slice/crio-2edeca0dae1b72f80412095fdffdd2d0fe054c3ed7be6281b57421993464ee69 WatchSource:0}: Error finding container 2edeca0dae1b72f80412095fdffdd2d0fe054c3ed7be6281b57421993464ee69: Status 404 returned error can't find the container with id 2edeca0dae1b72f80412095fdffdd2d0fe054c3ed7be6281b57421993464ee69 Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.251379 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j"] Jan 26 15:49:27 crc kubenswrapper[4713]: W0126 15:49:27.266893 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f508dd8_5689_4e70_b252_5a4e6204bd4b.slice/crio-674763df81c09a6722dcccb600190611ce060b30fca42c8b97c66c482ae495d3 WatchSource:0}: Error finding container 674763df81c09a6722dcccb600190611ce060b30fca42c8b97c66c482ae495d3: Status 404 returned error can't find the container with id 674763df81c09a6722dcccb600190611ce060b30fca42c8b97c66c482ae495d3 Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.862290 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" event={"ID":"e4ccf912-a778-4d91-84d9-bbfb4f83c221","Type":"ContainerStarted","Data":"85d93c65de67044ec72dd18aa48485ffb0618e5cbd56df086feabeb5e88f10f1"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.882606 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerStarted","Data":"6538125e4283e52d8f4f404d270b7f11b1645a7e8ec7d494705ee0cb2d859092"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.886634 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c6c78d75f-ltwjm" event={"ID":"52a083d8-82ae-4594-afc1-130135009e2b","Type":"ContainerStarted","Data":"3e6a2daf94d0060548c2dc87f32745d53c7221dcf634699c041c45da063972f8"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.886681 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-c6c78d75f-ltwjm" event={"ID":"52a083d8-82ae-4594-afc1-130135009e2b","Type":"ContainerStarted","Data":"2edeca0dae1b72f80412095fdffdd2d0fe054c3ed7be6281b57421993464ee69"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.888039 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" event={"ID":"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d","Type":"ContainerStarted","Data":"c029d4479642815dc0e9cf3337ddfac625a0eeacee00bdb57e458e9466ccba30"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.891511 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" event={"ID":"8f508dd8-5689-4e70-b252-5a4e6204bd4b","Type":"ContainerStarted","Data":"674763df81c09a6722dcccb600190611ce060b30fca42c8b97c66c482ae495d3"} Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.914151 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4s5rj" podStartSLOduration=2.39018992 podStartE2EDuration="6.914127891s" podCreationTimestamp="2026-01-26 15:49:21 +0000 UTC" firstStartedPulling="2026-01-26 15:49:22.807310976 +0000 UTC m=+937.944328211" lastFinishedPulling="2026-01-26 15:49:27.331248947 +0000 UTC m=+942.468266182" observedRunningTime="2026-01-26 15:49:27.912990788 +0000 UTC m=+943.050008023" watchObservedRunningTime="2026-01-26 15:49:27.914127891 +0000 UTC m=+943.051145126" Jan 26 15:49:27 crc kubenswrapper[4713]: I0126 15:49:27.939216 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c6c78d75f-ltwjm" podStartSLOduration=1.939196135 podStartE2EDuration="1.939196135s" podCreationTimestamp="2026-01-26 15:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:49:27.935728475 +0000 UTC m=+943.072745720" watchObservedRunningTime="2026-01-26 15:49:27.939196135 +0000 UTC m=+943.076213370" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.147662 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.147994 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.189746 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.191496 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.201299 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.210716 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.269129 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48wt\" (UniqueName: \"kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.269218 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.269264 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.370974 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.371078 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.371103 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48wt\" (UniqueName: \"kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.372323 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.372568 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities\") pod \"redhat-marketplace-n5dpf\" (UID: 
\"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.480490 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48wt\" (UniqueName: \"kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt\") pod \"redhat-marketplace-n5dpf\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:32 crc kubenswrapper[4713]: I0126 15:49:32.527020 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:33 crc kubenswrapper[4713]: I0126 15:49:33.005619 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:34 crc kubenswrapper[4713]: I0126 15:49:34.754948 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:35 crc kubenswrapper[4713]: I0126 15:49:35.161891 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4s5rj" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="registry-server" containerID="cri-o://6538125e4283e52d8f4f404d270b7f11b1645a7e8ec7d494705ee0cb2d859092" gracePeriod=2 Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.173886 4713 generic.go:334] "Generic (PLEG): container finished" podID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerID="6538125e4283e52d8f4f404d270b7f11b1645a7e8ec7d494705ee0cb2d859092" exitCode=0 Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.174161 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerDied","Data":"6538125e4283e52d8f4f404d270b7f11b1645a7e8ec7d494705ee0cb2d859092"} Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.424637 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.581980 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content\") pod \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.582039 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities\") pod \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.582149 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6dw\" (UniqueName: \"kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw\") pod \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\" (UID: \"0acdf1f3-a701-46ab-85e0-ec5eeb966a72\") " Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.584910 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities" (OuterVolumeSpecName: "utilities") pod "0acdf1f3-a701-46ab-85e0-ec5eeb966a72" (UID: "0acdf1f3-a701-46ab-85e0-ec5eeb966a72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.587419 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw" (OuterVolumeSpecName: "kube-api-access-gf6dw") pod "0acdf1f3-a701-46ab-85e0-ec5eeb966a72" (UID: "0acdf1f3-a701-46ab-85e0-ec5eeb966a72"). InnerVolumeSpecName "kube-api-access-gf6dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.588457 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:36 crc kubenswrapper[4713]: W0126 15:49:36.599550 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b6e0b7_905f_486d_9b41_26e8bc059e58.slice/crio-976f103d6a1b2fff6c41a3617c563b08b4f34fb03ef0301ebe7be2a0a24229a5 WatchSource:0}: Error finding container 976f103d6a1b2fff6c41a3617c563b08b4f34fb03ef0301ebe7be2a0a24229a5: Status 404 returned error can't find the container with id 976f103d6a1b2fff6c41a3617c563b08b4f34fb03ef0301ebe7be2a0a24229a5 Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.644414 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0acdf1f3-a701-46ab-85e0-ec5eeb966a72" (UID: "0acdf1f3-a701-46ab-85e0-ec5eeb966a72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.684258 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6dw\" (UniqueName: \"kubernetes.io/projected/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-kube-api-access-gf6dw\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.684304 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.684322 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0acdf1f3-a701-46ab-85e0-ec5eeb966a72-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.862271 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.862856 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:36 crc kubenswrapper[4713]: I0126 15:49:36.867484 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.198606 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" event={"ID":"8f508dd8-5689-4e70-b252-5a4e6204bd4b","Type":"ContainerStarted","Data":"7b8186194e35712feaffe08aa9906a9dd3fbcb4bde6d8054f8570d23c5107650"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.198810 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.208298 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k8t4l" event={"ID":"ad3354c5-b1d1-4473-99f1-0b1a9a4ded20","Type":"ContainerStarted","Data":"98330406b5a0f6a1b3f6f8d3494cf970e4d9f55167b9c89d93df80d2e1cf4807"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.208810 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.210924 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" event={"ID":"e4ccf912-a778-4d91-84d9-bbfb4f83c221","Type":"ContainerStarted","Data":"1d9880ac8d14a4289243421bc7613090466d2cee1e291a98ee5fb9f7331a133f"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.213911 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerID="a4748f2b63190ca02b97ecd68452a6819f7018f7b74c00da3e2fe567decc1c66" exitCode=0 Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.214033 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerDied","Data":"a4748f2b63190ca02b97ecd68452a6819f7018f7b74c00da3e2fe567decc1c66"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.214080 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" 
event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerStarted","Data":"976f103d6a1b2fff6c41a3617c563b08b4f34fb03ef0301ebe7be2a0a24229a5"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.235829 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" podStartSLOduration=1.957271628 podStartE2EDuration="11.235778931s" podCreationTimestamp="2026-01-26 15:49:26 +0000 UTC" firstStartedPulling="2026-01-26 15:49:27.269152152 +0000 UTC m=+942.406169397" lastFinishedPulling="2026-01-26 15:49:36.547659465 +0000 UTC m=+951.684676700" observedRunningTime="2026-01-26 15:49:37.219173981 +0000 UTC m=+952.356191216" watchObservedRunningTime="2026-01-26 15:49:37.235778931 +0000 UTC m=+952.372796196" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.237811 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5rj" event={"ID":"0acdf1f3-a701-46ab-85e0-ec5eeb966a72","Type":"ContainerDied","Data":"867dfa1cd810a0de70c6296ebd5f006e9a5332e89e8625d45de0fd9b71510ed0"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.237874 4713 scope.go:117] "RemoveContainer" containerID="6538125e4283e52d8f4f404d270b7f11b1645a7e8ec7d494705ee0cb2d859092" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.237882 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5rj" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.240087 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" event={"ID":"a6fe59c9-c3b5-407e-9d75-9e7f98d4142d","Type":"ContainerStarted","Data":"22f8e90b41b72dbf0677e527ddd2b4006893725abd06d3a7c02ab3109c310ce8"} Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.252955 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c6c78d75f-ltwjm" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.254271 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k8t4l" podStartSLOduration=1.241431051 podStartE2EDuration="11.254252505s" podCreationTimestamp="2026-01-26 15:49:26 +0000 UTC" firstStartedPulling="2026-01-26 15:49:26.535533421 +0000 UTC m=+941.672550656" lastFinishedPulling="2026-01-26 15:49:36.548354875 +0000 UTC m=+951.685372110" observedRunningTime="2026-01-26 15:49:37.248388926 +0000 UTC m=+952.385406161" watchObservedRunningTime="2026-01-26 15:49:37.254252505 +0000 UTC m=+952.391269740" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.277784 4713 scope.go:117] "RemoveContainer" containerID="c9948148579e9f9b968535a5293a1df0320b96a5544d74fdb6c739b8b2dd59fc" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.299589 4713 scope.go:117] "RemoveContainer" containerID="43974df7094a853d98bda1d4c8556da8e34eb7c766c9e622055f62b234dbf87c" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.301015 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.305859 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4s5rj"] Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.307096 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7bzvm" 
podStartSLOduration=1.7608577090000002 podStartE2EDuration="11.307077003s" podCreationTimestamp="2026-01-26 15:49:26 +0000 UTC" firstStartedPulling="2026-01-26 15:49:27.001424011 +0000 UTC m=+942.138441246" lastFinishedPulling="2026-01-26 15:49:36.547643305 +0000 UTC m=+951.684660540" observedRunningTime="2026-01-26 15:49:37.302593203 +0000 UTC m=+952.439610438" watchObservedRunningTime="2026-01-26 15:49:37.307077003 +0000 UTC m=+952.444094238" Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.361420 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:49:37 crc kubenswrapper[4713]: I0126 15:49:37.811471 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" path="/var/lib/kubelet/pods/0acdf1f3-a701-46ab-85e0-ec5eeb966a72/volumes" Jan 26 15:49:40 crc kubenswrapper[4713]: I0126 15:49:40.267093 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerID="011a96af7b816239ae19d53d19547912b9a9eb3b581bea28abe703cbc8acb5f6" exitCode=0 Jan 26 15:49:40 crc kubenswrapper[4713]: I0126 15:49:40.267142 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerDied","Data":"011a96af7b816239ae19d53d19547912b9a9eb3b581bea28abe703cbc8acb5f6"} Jan 26 15:49:41 crc kubenswrapper[4713]: I0126 15:49:41.273727 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" event={"ID":"e4ccf912-a778-4d91-84d9-bbfb4f83c221","Type":"ContainerStarted","Data":"83dfa5915f25fa4bf95c20d75f7052ffb8b0e33fa93b68eae46a66f8fbd01ac8"} Jan 26 15:49:41 crc kubenswrapper[4713]: I0126 15:49:41.276078 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerStarted","Data":"b596cd498050c82ed23197ebecf7a522b55f2374804d62ad65eea944cffa1003"} Jan 26 15:49:41 crc kubenswrapper[4713]: I0126 15:49:41.295653 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-52jc4" podStartSLOduration=1.9176524609999999 podStartE2EDuration="15.295637394s" podCreationTimestamp="2026-01-26 15:49:26 +0000 UTC" firstStartedPulling="2026-01-26 15:49:26.840776466 +0000 UTC m=+941.977793701" lastFinishedPulling="2026-01-26 15:49:40.218761399 +0000 UTC m=+955.355778634" observedRunningTime="2026-01-26 15:49:41.291586637 +0000 UTC m=+956.428603892" watchObservedRunningTime="2026-01-26 15:49:41.295637394 +0000 UTC m=+956.432654629" Jan 26 15:49:41 crc kubenswrapper[4713]: I0126 15:49:41.314992 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5dpf" podStartSLOduration=5.832269247 podStartE2EDuration="9.314976394s" podCreationTimestamp="2026-01-26 15:49:32 +0000 UTC" firstStartedPulling="2026-01-26 15:49:37.224043102 +0000 UTC m=+952.361060337" lastFinishedPulling="2026-01-26 15:49:40.706750249 +0000 UTC m=+955.843767484" observedRunningTime="2026-01-26 15:49:41.314788928 +0000 UTC m=+956.451806163" watchObservedRunningTime="2026-01-26 15:49:41.314976394 +0000 UTC m=+956.451993629" Jan 26 15:49:41 crc kubenswrapper[4713]: I0126 15:49:41.512623 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k8t4l" Jan 26 
15:49:42 crc kubenswrapper[4713]: I0126 15:49:42.527159 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:42 crc kubenswrapper[4713]: I0126 15:49:42.527531 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:42 crc kubenswrapper[4713]: I0126 15:49:42.569518 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:47 crc kubenswrapper[4713]: I0126 15:49:47.051265 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t929j" Jan 26 15:49:52 crc kubenswrapper[4713]: I0126 15:49:52.576212 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:52 crc kubenswrapper[4713]: I0126 15:49:52.631333 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:53 crc kubenswrapper[4713]: I0126 15:49:53.348828 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5dpf" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="registry-server" containerID="cri-o://b596cd498050c82ed23197ebecf7a522b55f2374804d62ad65eea944cffa1003" gracePeriod=2 Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.392822 4713 generic.go:334] "Generic (PLEG): container finished" podID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerID="b596cd498050c82ed23197ebecf7a522b55f2374804d62ad65eea944cffa1003" exitCode=0 Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.392906 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerDied","Data":"b596cd498050c82ed23197ebecf7a522b55f2374804d62ad65eea944cffa1003"} Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.577378 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.643220 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities\") pod \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.643303 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content\") pod \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.643464 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d48wt\" (UniqueName: \"kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt\") pod \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\" (UID: \"f4b6e0b7-905f-486d-9b41-26e8bc059e58\") " Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.644318 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities" (OuterVolumeSpecName: "utilities") pod "f4b6e0b7-905f-486d-9b41-26e8bc059e58" (UID: "f4b6e0b7-905f-486d-9b41-26e8bc059e58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.660656 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt" (OuterVolumeSpecName: "kube-api-access-d48wt") pod "f4b6e0b7-905f-486d-9b41-26e8bc059e58" (UID: "f4b6e0b7-905f-486d-9b41-26e8bc059e58"). InnerVolumeSpecName "kube-api-access-d48wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.672252 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4b6e0b7-905f-486d-9b41-26e8bc059e58" (UID: "f4b6e0b7-905f-486d-9b41-26e8bc059e58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.745150 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d48wt\" (UniqueName: \"kubernetes.io/projected/f4b6e0b7-905f-486d-9b41-26e8bc059e58-kube-api-access-d48wt\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.745193 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:54 crc kubenswrapper[4713]: I0126 15:49:54.745204 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b6e0b7-905f-486d-9b41-26e8bc059e58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.404505 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5dpf" event={"ID":"f4b6e0b7-905f-486d-9b41-26e8bc059e58","Type":"ContainerDied","Data":"976f103d6a1b2fff6c41a3617c563b08b4f34fb03ef0301ebe7be2a0a24229a5"} Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.404704 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5dpf" Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.404843 4713 scope.go:117] "RemoveContainer" containerID="b596cd498050c82ed23197ebecf7a522b55f2374804d62ad65eea944cffa1003" Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.444113 4713 scope.go:117] "RemoveContainer" containerID="011a96af7b816239ae19d53d19547912b9a9eb3b581bea28abe703cbc8acb5f6" Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.448564 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.453313 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5dpf"] Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.464734 4713 scope.go:117] "RemoveContainer" containerID="a4748f2b63190ca02b97ecd68452a6819f7018f7b74c00da3e2fe567decc1c66" Jan 26 15:49:55 crc kubenswrapper[4713]: I0126 15:49:55.815957 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" path="/var/lib/kubelet/pods/f4b6e0b7-905f-486d-9b41-26e8bc059e58/volumes" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.436085 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-p5wsk" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console" containerID="cri-o://6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20" gracePeriod=15 Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.702717 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp"] Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703324 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="extract-utilities" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703351 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="extract-utilities" Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703396 4713 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703407 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703443 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703454 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703469 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="extract-content" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703477 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="extract-content" Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703490 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="extract-utilities" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703498 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="extract-utilities" Jan 26 15:50:02 crc kubenswrapper[4713]: E0126 15:50:02.703512 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="extract-content" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703520 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="extract-content" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703665 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acdf1f3-a701-46ab-85e0-ec5eeb966a72" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.703688 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b6e0b7-905f-486d-9b41-26e8bc059e58" containerName="registry-server" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.704788 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.707178 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.719990 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp"] Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.858279 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.859224 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg97j\" (UniqueName: \"kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.859442 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.960343 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.960513 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.960542 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg97j\" (UniqueName: \"kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.960806 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.961213 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.962039 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p5wsk_adaaafc1-19f7-4240-bf6b-9c5c8adfa632/console/0.log" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.962101 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:50:02 crc kubenswrapper[4713]: I0126 15:50:02.984306 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg97j\" (UniqueName: \"kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.035594 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062050 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5nq\" (UniqueName: \"kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062109 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062158 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062186 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062213 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle\") pod 
\"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062239 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.062265 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config\") pod \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\" (UID: \"adaaafc1-19f7-4240-bf6b-9c5c8adfa632\") " Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.063610 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config" (OuterVolumeSpecName: "console-config") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.064284 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.064379 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca" (OuterVolumeSpecName: "service-ca") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.064388 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.075098 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.075281 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq" (OuterVolumeSpecName: "kube-api-access-vp5nq") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "kube-api-access-vp5nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.075281 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "adaaafc1-19f7-4240-bf6b-9c5c8adfa632" (UID: "adaaafc1-19f7-4240-bf6b-9c5c8adfa632"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164231 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5nq\" (UniqueName: \"kubernetes.io/projected/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-kube-api-access-vp5nq\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164279 4713 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164292 4713 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164303 4713 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164318 4713 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164328 4713 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.164343 4713 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adaaafc1-19f7-4240-bf6b-9c5c8adfa632-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.301248 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.301311 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.465012 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp"] Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478127 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-p5wsk_adaaafc1-19f7-4240-bf6b-9c5c8adfa632/console/0.log" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478188 4713 generic.go:334] "Generic (PLEG): container finished" podID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerID="6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20" exitCode=2 Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478245 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p5wsk" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478255 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p5wsk" event={"ID":"adaaafc1-19f7-4240-bf6b-9c5c8adfa632","Type":"ContainerDied","Data":"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20"} Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478286 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p5wsk" event={"ID":"adaaafc1-19f7-4240-bf6b-9c5c8adfa632","Type":"ContainerDied","Data":"c431014469e6018a13f7d8415185d5f87f7bbb938ce6c8af3d5f2b615457b3c3"} Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.478305 4713 scope.go:117] "RemoveContainer" containerID="6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.483091 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" event={"ID":"83046eff-95ef-45f2-bdfa-24e38df1cfb0","Type":"ContainerStarted","Data":"61be6c439d04647da754a550743752cd6d2d84850bc1bcb7fb83f7b30cf305bc"} Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.497439 4713 scope.go:117] "RemoveContainer" containerID="6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20" Jan 26 15:50:03 crc kubenswrapper[4713]: E0126 15:50:03.498399 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20\": container with ID starting with 6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20 not found: ID does not exist" containerID="6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.498432 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20"} err="failed to get container status \"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20\": rpc error: code = NotFound desc = could not find container \"6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20\": container with ID starting with 6b70c1ae4e3e388995ced7861385778517dee1d02db82241b95855910aa86f20 not found: ID does not exist" Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.511726 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.516675 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-p5wsk"] Jan 26 15:50:03 crc kubenswrapper[4713]: I0126 15:50:03.811686 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" path="/var/lib/kubelet/pods/adaaafc1-19f7-4240-bf6b-9c5c8adfa632/volumes" 
Jan 26 15:50:04 crc kubenswrapper[4713]: I0126 15:50:04.512881 4713 generic.go:334] "Generic (PLEG): container finished" podID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerID="6d2132bac93838051921064ea0668078f1e7011a32867aeb3f6cfb65e160aba5" exitCode=0
Jan 26 15:50:04 crc kubenswrapper[4713]: I0126 15:50:04.512958 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" event={"ID":"83046eff-95ef-45f2-bdfa-24e38df1cfb0","Type":"ContainerDied","Data":"6d2132bac93838051921064ea0668078f1e7011a32867aeb3f6cfb65e160aba5"}
Jan 26 15:50:06 crc kubenswrapper[4713]: I0126 15:50:06.530981 4713 generic.go:334] "Generic (PLEG): container finished" podID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerID="8c6dbeab3818309b8f122551126a20cd6d132fb38cd9b83100e536375ddaae10" exitCode=0
Jan 26 15:50:06 crc kubenswrapper[4713]: I0126 15:50:06.531026 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" event={"ID":"83046eff-95ef-45f2-bdfa-24e38df1cfb0","Type":"ContainerDied","Data":"8c6dbeab3818309b8f122551126a20cd6d132fb38cd9b83100e536375ddaae10"}
Jan 26 15:50:07 crc kubenswrapper[4713]: I0126 15:50:07.538866 4713 generic.go:334] "Generic (PLEG): container finished" podID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerID="0f9266578582e673b0822833be7e51951d236331c30922200cc31d646a989c3b" exitCode=0
Jan 26 15:50:07 crc kubenswrapper[4713]: I0126 15:50:07.538964 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" event={"ID":"83046eff-95ef-45f2-bdfa-24e38df1cfb0","Type":"ContainerDied","Data":"0f9266578582e673b0822833be7e51951d236331c30922200cc31d646a989c3b"}
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.810518 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.936689 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle\") pod \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") "
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.936763 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util\") pod \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") "
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.936838 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg97j\" (UniqueName: \"kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j\") pod \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\" (UID: \"83046eff-95ef-45f2-bdfa-24e38df1cfb0\") "
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.938301 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle" (OuterVolumeSpecName: "bundle") pod "83046eff-95ef-45f2-bdfa-24e38df1cfb0" (UID: "83046eff-95ef-45f2-bdfa-24e38df1cfb0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.947680 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j" (OuterVolumeSpecName: "kube-api-access-dg97j") pod "83046eff-95ef-45f2-bdfa-24e38df1cfb0" (UID: "83046eff-95ef-45f2-bdfa-24e38df1cfb0"). InnerVolumeSpecName "kube-api-access-dg97j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977253 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljqfl"]
Jan 26 15:50:08 crc kubenswrapper[4713]: E0126 15:50:08.977504 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="pull"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977517 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="pull"
Jan 26 15:50:08 crc kubenswrapper[4713]: E0126 15:50:08.977534 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="util"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977539 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="util"
Jan 26 15:50:08 crc kubenswrapper[4713]: E0126 15:50:08.977550 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977558 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console"
Jan 26 15:50:08 crc kubenswrapper[4713]: E0126 15:50:08.977570 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="extract"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977576 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="extract"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977675 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="adaaafc1-19f7-4240-bf6b-9c5c8adfa632" containerName="console"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.977687 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="83046eff-95ef-45f2-bdfa-24e38df1cfb0" containerName="extract"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.978628 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:08 crc kubenswrapper[4713]: I0126 15:50:08.991659 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljqfl"]
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.038044 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjrbl\" (UniqueName: \"kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.038123 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.038190 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.038271 4713 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.038285 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg97j\" (UniqueName: \"kubernetes.io/projected/83046eff-95ef-45f2-bdfa-24e38df1cfb0-kube-api-access-dg97j\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.139961 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjrbl\" (UniqueName: \"kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.140035 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.140074 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.140492 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.140510 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.163225 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjrbl\" (UniqueName: \"kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl\") pod \"community-operators-ljqfl\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.168016 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util" (OuterVolumeSpecName: "util") pod "83046eff-95ef-45f2-bdfa-24e38df1cfb0" (UID: "83046eff-95ef-45f2-bdfa-24e38df1cfb0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.241707 4713 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83046eff-95ef-45f2-bdfa-24e38df1cfb0-util\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.297683 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.554241 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp" event={"ID":"83046eff-95ef-45f2-bdfa-24e38df1cfb0","Type":"ContainerDied","Data":"61be6c439d04647da754a550743752cd6d2d84850bc1bcb7fb83f7b30cf305bc"}
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.554703 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61be6c439d04647da754a550743752cd6d2d84850bc1bcb7fb83f7b30cf305bc"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.554336 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp"
Jan 26 15:50:09 crc kubenswrapper[4713]: I0126 15:50:09.798425 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljqfl"]
Jan 26 15:50:10 crc kubenswrapper[4713]: I0126 15:50:10.562973 4713 generic.go:334] "Generic (PLEG): container finished" podID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerID="e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db" exitCode=0
Jan 26 15:50:10 crc kubenswrapper[4713]: I0126 15:50:10.563046 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerDied","Data":"e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db"}
Jan 26 15:50:10 crc kubenswrapper[4713]: I0126 15:50:10.563266 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerStarted","Data":"cff8c57b69cdade6f2983bb5faa8bd2b42f38ef97a0d5d9723bc842aa1993a56"}
Jan 26 15:50:11 crc kubenswrapper[4713]: I0126 15:50:11.572419 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerStarted","Data":"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee"}
Jan 26 15:50:12 crc kubenswrapper[4713]: I0126 15:50:12.581757 4713 generic.go:334] "Generic (PLEG): container finished" podID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerID="dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee" exitCode=0
Jan 26 15:50:12 crc kubenswrapper[4713]: I0126 15:50:12.581806 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerDied","Data":"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee"}
Jan 26 15:50:13 crc kubenswrapper[4713]: I0126 15:50:13.588741 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerStarted","Data":"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7"}
Jan 26 15:50:13 crc kubenswrapper[4713]: I0126 15:50:13.620751 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljqfl" podStartSLOduration=3.210764864 podStartE2EDuration="5.620730526s" podCreationTimestamp="2026-01-26 15:50:08 +0000 UTC" firstStartedPulling="2026-01-26 15:50:10.564325911 +0000 UTC m=+985.701343146" lastFinishedPulling="2026-01-26 15:50:12.974291573 +0000 UTC m=+988.111308808" observedRunningTime="2026-01-26 15:50:13.615984779 +0000 UTC m=+988.753002014" watchObservedRunningTime="2026-01-26 15:50:13.620730526 +0000 UTC m=+988.757747761"
Jan 26 15:50:19 crc kubenswrapper[4713]: I0126 15:50:19.298437 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:19 crc kubenswrapper[4713]: I0126 15:50:19.298956 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:19 crc kubenswrapper[4713]: I0126 15:50:19.354199 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:19 crc kubenswrapper[4713]: I0126 15:50:19.671415 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljqfl"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.148545 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"]
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.149802 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.152292 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.152533 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.153235 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-s59pt"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.156878 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.157793 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.164702 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"]
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.306600 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfh4\" (UniqueName: \"kubernetes.io/projected/351bcd96-d2cb-4d74-8794-69a879f52c35-kube-api-access-zwfh4\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.306694 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-apiservice-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.306723 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-webhook-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.408290 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-apiservice-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.408382 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-webhook-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.408449 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfh4\" (UniqueName: \"kubernetes.io/projected/351bcd96-d2cb-4d74-8794-69a879f52c35-kube-api-access-zwfh4\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.414087 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-webhook-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.414259 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351bcd96-d2cb-4d74-8794-69a879f52c35-apiservice-cert\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.433294 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfh4\" (UniqueName: \"kubernetes.io/projected/351bcd96-d2cb-4d74-8794-69a879f52c35-kube-api-access-zwfh4\") pod \"metallb-operator-controller-manager-778b445bd5-8bzgb\" (UID: \"351bcd96-d2cb-4d74-8794-69a879f52c35\") " pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.468991 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.487556 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc"]
Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.488490 4713 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.491349 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.491509 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qxm2z" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.491951 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.566999 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc"] Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.611913 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-webhook-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.612200 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9drq\" (UniqueName: \"kubernetes.io/projected/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-kube-api-access-l9drq\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.612246 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-apiservice-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.713251 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-apiservice-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.713344 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-webhook-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.713427 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9drq\" (UniqueName: \"kubernetes.io/projected/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-kube-api-access-l9drq\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 
15:50:20.718837 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-apiservice-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.729723 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9drq\" (UniqueName: \"kubernetes.io/projected/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-kube-api-access-l9drq\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.733464 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428-webhook-cert\") pod \"metallb-operator-webhook-server-6c895c556d-p2djc\" (UID: \"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428\") " pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.859965 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:20 crc kubenswrapper[4713]: I0126 15:50:20.951383 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb"] Jan 26 15:50:21 crc kubenswrapper[4713]: I0126 15:50:21.320224 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc"] Jan 26 15:50:21 crc kubenswrapper[4713]: W0126 15:50:21.324340 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ba8417_c80d_4ef5_b5d9_d93ce9c6c428.slice/crio-080c17dca1d8a4ba7e26d7ff2076d686d9589b2aaf403f135f8ec94cee5b8cd6 WatchSource:0}: Error finding container 080c17dca1d8a4ba7e26d7ff2076d686d9589b2aaf403f135f8ec94cee5b8cd6: Status 404 returned error can't find the container with id 080c17dca1d8a4ba7e26d7ff2076d686d9589b2aaf403f135f8ec94cee5b8cd6 Jan 26 15:50:21 crc kubenswrapper[4713]: I0126 15:50:21.567011 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljqfl"] Jan 26 15:50:21 crc kubenswrapper[4713]: I0126 15:50:21.719810 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb" event={"ID":"351bcd96-d2cb-4d74-8794-69a879f52c35","Type":"ContainerStarted","Data":"972f4473a91ead485e98f3720881f098e5960236cf3c37e437ff155667cf41f8"} Jan 26 15:50:21 crc kubenswrapper[4713]: I0126 15:50:21.720750 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" event={"ID":"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428","Type":"ContainerStarted","Data":"080c17dca1d8a4ba7e26d7ff2076d686d9589b2aaf403f135f8ec94cee5b8cd6"} Jan 26 15:50:21 crc kubenswrapper[4713]: I0126 15:50:21.720952 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljqfl" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="registry-server" 
containerID="cri-o://276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7" gracePeriod=2 Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.083439 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljqfl" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.134283 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content\") pod \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.134455 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjrbl\" (UniqueName: \"kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl\") pod \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.134498 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities\") pod \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\" (UID: \"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1\") " Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.135682 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities" (OuterVolumeSpecName: "utilities") pod "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" (UID: "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.139842 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl" (OuterVolumeSpecName: "kube-api-access-wjrbl") pod "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" (UID: "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1"). InnerVolumeSpecName "kube-api-access-wjrbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.187088 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" (UID: "3423a7b8-8cbd-41de-ad2b-1bcaa067faa1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.236285 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjrbl\" (UniqueName: \"kubernetes.io/projected/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-kube-api-access-wjrbl\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.236328 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.236354 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.733653 4713 generic.go:334] "Generic (PLEG): container finished" podID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerID="276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7" exitCode=0 Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.733718 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerDied","Data":"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7"} Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.733787 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljqfl" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.733845 4713 scope.go:117] "RemoveContainer" containerID="276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.733808 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljqfl" event={"ID":"3423a7b8-8cbd-41de-ad2b-1bcaa067faa1","Type":"ContainerDied","Data":"cff8c57b69cdade6f2983bb5faa8bd2b42f38ef97a0d5d9723bc842aa1993a56"} Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.771739 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljqfl"] Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.776261 4713 scope.go:117] "RemoveContainer" containerID="dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.778218 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljqfl"] Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.824574 4713 scope.go:117] "RemoveContainer" containerID="e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.843551 4713 scope.go:117] "RemoveContainer" containerID="276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7" Jan 26 15:50:22 crc kubenswrapper[4713]: E0126 15:50:22.844087 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7\": container with ID starting with 276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7 not found: ID does not exist" containerID="276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.844143 
4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7"} err="failed to get container status \"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7\": rpc error: code = NotFound desc = could not find container \"276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7\": container with ID starting with 276b8b2c00c91f8e3269cb835e0f4a6c6da8970c5ca76187bf864ae6b11572e7 not found: ID does not exist" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.844174 4713 scope.go:117] "RemoveContainer" containerID="dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee" Jan 26 15:50:22 crc kubenswrapper[4713]: E0126 15:50:22.844702 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee\": container with ID starting with dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee not found: ID does not exist" containerID="dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.844734 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee"} err="failed to get container status \"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee\": rpc error: code = NotFound desc = could not find container \"dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee\": container with ID starting with dc6d9aef653f681a356fa35c2d9c39ac87f5306354e077d13fbe4e334071d5ee not found: ID does not exist" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.844755 4713 scope.go:117] "RemoveContainer" containerID="e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db" Jan 26 15:50:22 crc kubenswrapper[4713]: E0126 15:50:22.845047 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db\": container with ID starting with e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db not found: ID does not exist" containerID="e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db" Jan 26 15:50:22 crc kubenswrapper[4713]: I0126 15:50:22.845071 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db"} err="failed to get container status \"e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db\": rpc error: code = NotFound desc = could not find container \"e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db\": container with ID starting with e3e8f09748b5516658077164992f4929febe1ee4547c3a337b1b22d043a407db not found: ID does not exist" Jan 26 15:50:23 crc kubenswrapper[4713]: I0126 15:50:23.812326 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" path="/var/lib/kubelet/pods/3423a7b8-8cbd-41de-ad2b-1bcaa067faa1/volumes" Jan 26 15:50:24 crc kubenswrapper[4713]: I0126 15:50:24.751708 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb" 
event={"ID":"351bcd96-d2cb-4d74-8794-69a879f52c35","Type":"ContainerStarted","Data":"abf87f80327e7dd73af1f351075d046b0a7ecf22d365550ac0ab80d59a4471ba"} Jan 26 15:50:24 crc kubenswrapper[4713]: I0126 15:50:24.751812 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb" Jan 26 15:50:24 crc kubenswrapper[4713]: I0126 15:50:24.777988 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb" podStartSLOduration=1.387854605 podStartE2EDuration="4.777959503s" podCreationTimestamp="2026-01-26 15:50:20 +0000 UTC" firstStartedPulling="2026-01-26 15:50:20.971075273 +0000 UTC m=+996.108092508" lastFinishedPulling="2026-01-26 15:50:24.361180161 +0000 UTC m=+999.498197406" observedRunningTime="2026-01-26 15:50:24.774407441 +0000 UTC m=+999.911424686" watchObservedRunningTime="2026-01-26 15:50:24.777959503 +0000 UTC m=+999.914976748" Jan 26 15:50:26 crc kubenswrapper[4713]: I0126 15:50:26.765803 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" event={"ID":"b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428","Type":"ContainerStarted","Data":"dd4cb0fcf3d462904f1a3ed69cd9286ee2e24999dd83679824d28d9de2a4dbf5"} Jan 26 15:50:26 crc kubenswrapper[4713]: I0126 15:50:26.766466 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:50:26 crc kubenswrapper[4713]: I0126 15:50:26.792330 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" podStartSLOduration=1.553692393 podStartE2EDuration="6.792310342s" podCreationTimestamp="2026-01-26 15:50:20 +0000 UTC" firstStartedPulling="2026-01-26 15:50:21.327542551 +0000 UTC m=+996.464559786" lastFinishedPulling="2026-01-26 15:50:26.5661605 +0000 UTC m=+1001.703177735" observedRunningTime="2026-01-26 15:50:26.790740307 +0000 UTC m=+1001.927757542" watchObservedRunningTime="2026-01-26 15:50:26.792310342 +0000 UTC m=+1001.929327597" Jan 26 15:50:33 crc kubenswrapper[4713]: I0126 15:50:33.301909 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:50:33 crc kubenswrapper[4713]: I0126 15:50:33.302192 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:50:40 crc kubenswrapper[4713]: I0126 15:50:40.864214 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c895c556d-p2djc" Jan 26 15:51:00 crc kubenswrapper[4713]: I0126 15:51:00.471985 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-778b445bd5-8bzgb" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.283295 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-66jj8"] Jan 26 15:51:01 crc 
kubenswrapper[4713]: E0126 15:51:01.283707 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="registry-server" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.283736 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="registry-server" Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.283769 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="extract-content" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.283780 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="extract-content" Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.283797 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="extract-utilities" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.283810 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="extract-utilities" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.283998 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3423a7b8-8cbd-41de-ad2b-1bcaa067faa1" containerName="registry-server" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.287604 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.288473 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45"] Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.289346 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.290899 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wstts" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.291267 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.291558 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.292138 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.302815 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45"] Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.370627 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nwj9r"] Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.371856 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: W0126 15:51:01.376446 4713 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: configmaps "metallb-excludel2" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 15:51:01 crc kubenswrapper[4713]: W0126 15:51:01.376498 4713 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: secrets "speaker-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 15:51:01 crc kubenswrapper[4713]: W0126 15:51:01.376512 4713 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: secrets "metallb-memberlist" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.376526 4713 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:51:01 crc kubenswrapper[4713]: W0126 15:51:01.376537 4713 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-dwtw6": failed to list *v1.Secret: secrets "speaker-dockercfg-dwtw6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.376562 4713 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-dwtw6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-dockercfg-dwtw6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.376506 4713 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"metallb-excludel2\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:51:01 crc kubenswrapper[4713]: E0126 15:51:01.376541 4713 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-memberlist\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.409528 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-xqkgk"] Jan 26 
15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.411896 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.415638 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.419792 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xqkgk"] Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.440877 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-sockets\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.440948 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43b745e9-8cc0-4186-bf90-355ce248ab27-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.440977 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfstn\" (UniqueName: \"kubernetes.io/projected/434ec099-efe3-4f0e-812c-2b684c7f8274-kube-api-access-pfstn\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441015 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-conf\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441059 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2css8\" (UniqueName: \"kubernetes.io/projected/43b745e9-8cc0-4186-bf90-355ce248ab27-kube-api-access-2css8\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441089 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441120 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-startup\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441143 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-reloader\") pod 
\"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.441169 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics-certs\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543040 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543124 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-cert\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543161 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-sockets\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543179 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543222 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43b745e9-8cc0-4186-bf90-355ce248ab27-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543251 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95sw\" (UniqueName: \"kubernetes.io/projected/23261bd8-2fa1-4f97-851f-85aff45181b8-kube-api-access-n95sw\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543272 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfstn\" (UniqueName: \"kubernetes.io/projected/434ec099-efe3-4f0e-812c-2b684c7f8274-kube-api-access-pfstn\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543289 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxb8\" (UniqueName: \"kubernetes.io/projected/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-kube-api-access-blxb8\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " 
pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543306 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-conf\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543329 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-metrics-certs\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543349 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2css8\" (UniqueName: \"kubernetes.io/projected/43b745e9-8cc0-4186-bf90-355ce248ab27-kube-api-access-2css8\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543388 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metallb-excludel2\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543412 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543432 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-startup\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543448 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-reloader\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.543463 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics-certs\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.544205 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-sockets\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.544308 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-conf\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.544605 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-reloader\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.544798 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/434ec099-efe3-4f0e-812c-2b684c7f8274-frr-startup\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.544915 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.550880 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/434ec099-efe3-4f0e-812c-2b684c7f8274-metrics-certs\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.559004 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43b745e9-8cc0-4186-bf90-355ce248ab27-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.562037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfstn\" (UniqueName: \"kubernetes.io/projected/434ec099-efe3-4f0e-812c-2b684c7f8274-kube-api-access-pfstn\") pod \"frr-k8s-66jj8\" (UID: \"434ec099-efe3-4f0e-812c-2b684c7f8274\") " pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.563919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2css8\" (UniqueName: \"kubernetes.io/projected/43b745e9-8cc0-4186-bf90-355ce248ab27-kube-api-access-2css8\") pod \"frr-k8s-webhook-server-7df86c4f6c-t4l45\" (UID: \"43b745e9-8cc0-4186-bf90-355ce248ab27\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.608091 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.621703 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644337 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxb8\" (UniqueName: \"kubernetes.io/projected/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-kube-api-access-blxb8\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644429 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-metrics-certs\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644473 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metallb-excludel2\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644539 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644565 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-cert\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.644637 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95sw\" (UniqueName: \"kubernetes.io/projected/23261bd8-2fa1-4f97-851f-85aff45181b8-kube-api-access-n95sw\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.648113 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.666102 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-metrics-certs\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.668675 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxb8\" (UniqueName: \"kubernetes.io/projected/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-kube-api-access-blxb8\") pod \"speaker-nwj9r\" (UID: 
\"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.668786 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95sw\" (UniqueName: \"kubernetes.io/projected/23261bd8-2fa1-4f97-851f-85aff45181b8-kube-api-access-n95sw\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.669852 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23261bd8-2fa1-4f97-851f-85aff45181b8-cert\") pod \"controller-6968d8fdc4-xqkgk\" (UID: \"23261bd8-2fa1-4f97-851f-85aff45181b8\") " pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.736687 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:01 crc kubenswrapper[4713]: I0126 15:51:01.940118 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xqkgk"] Jan 26 15:51:01 crc kubenswrapper[4713]: W0126 15:51:01.944861 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23261bd8_2fa1_4f97_851f_85aff45181b8.slice/crio-9ede6d4574be211d166fdacfff690a3927a68ef1e2bad266e1466b03f2395518 WatchSource:0}: Error finding container 9ede6d4574be211d166fdacfff690a3927a68ef1e2bad266e1466b03f2395518: Status 404 returned error can't find the container with id 9ede6d4574be211d166fdacfff690a3927a68ef1e2bad266e1466b03f2395518 Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.020604 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xqkgk" event={"ID":"23261bd8-2fa1-4f97-851f-85aff45181b8","Type":"ContainerStarted","Data":"9ede6d4574be211d166fdacfff690a3927a68ef1e2bad266e1466b03f2395518"} Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.036493 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"654dd747ab39ab9618203623fb319977a7cdc31d553d62c664afbf78c37600f0"} Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.083863 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45"] Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.352885 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dwtw6" Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.618007 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.626923 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metallb-excludel2\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:02 crc kubenswrapper[4713]: E0126 15:51:02.645629 4713 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: failed to sync secret cache: timed out waiting for the condition Jan 26 15:51:02 crc kubenswrapper[4713]: E0126 15:51:02.645717 4713 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: failed to sync secret cache: timed out waiting for the condition Jan 26 15:51:02 crc kubenswrapper[4713]: E0126 15:51:02.645732 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs podName:5d4dd3fb-43d0-46a8-9a41-1122358e82ce nodeName:}" failed. No retries permitted until 2026-01-26 15:51:03.145711141 +0000 UTC m=+1038.282728366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs") pod "speaker-nwj9r" (UID: "5d4dd3fb-43d0-46a8-9a41-1122358e82ce") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:51:02 crc kubenswrapper[4713]: E0126 15:51:02.645903 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist podName:5d4dd3fb-43d0-46a8-9a41-1122358e82ce nodeName:}" failed. No retries permitted until 2026-01-26 15:51:03.145865865 +0000 UTC m=+1038.282883170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist") pod "speaker-nwj9r" (UID: "5d4dd3fb-43d0-46a8-9a41-1122358e82ce") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.838023 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 15:51:02 crc kubenswrapper[4713]: I0126 15:51:02.949011 4713 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.043224 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xqkgk" event={"ID":"23261bd8-2fa1-4f97-851f-85aff45181b8","Type":"ContainerStarted","Data":"25dffa34aa6c007768effbac84bc17d6e66a5e34971719e8fb299df1c5bf4907"} Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.043273 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xqkgk" event={"ID":"23261bd8-2fa1-4f97-851f-85aff45181b8","Type":"ContainerStarted","Data":"18b1c25e3926bdfc2b42967af5f615f69c06a682da58c7b79cd7783716ee4c74"} Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.043321 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.044151 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" event={"ID":"43b745e9-8cc0-4186-bf90-355ce248ab27","Type":"ContainerStarted","Data":"77e4afcbf63343c0717f2cda3c574bce00d5ef09dc9260f559ba472729dfbc24"} Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.061262 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-xqkgk" podStartSLOduration=2.061239746 podStartE2EDuration="2.061239746s" podCreationTimestamp="2026-01-26 15:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:51:03.059215928 +0000 UTC m=+1038.196233163" watchObservedRunningTime="2026-01-26 15:51:03.061239746 +0000 UTC m=+1038.198257001" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.165401 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.165472 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.171955 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-metrics-certs\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.175189 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d4dd3fb-43d0-46a8-9a41-1122358e82ce-memberlist\") pod \"speaker-nwj9r\" (UID: \"5d4dd3fb-43d0-46a8-9a41-1122358e82ce\") " pod="metallb-system/speaker-nwj9r" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.218868 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nwj9r" Jan 26 15:51:03 crc kubenswrapper[4713]: W0126 15:51:03.241232 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d4dd3fb_43d0_46a8_9a41_1122358e82ce.slice/crio-a0d67a1b86a7c0deebbb9f1cd103bc1d9a65fca613320a649021b0e97cf776b8 WatchSource:0}: Error finding container a0d67a1b86a7c0deebbb9f1cd103bc1d9a65fca613320a649021b0e97cf776b8: Status 404 returned error can't find the container with id a0d67a1b86a7c0deebbb9f1cd103bc1d9a65fca613320a649021b0e97cf776b8 Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.301853 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.301926 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.301980 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.302682 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:51:03 crc kubenswrapper[4713]: I0126 15:51:03.302760 4713 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff" gracePeriod=600 Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.054465 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff" exitCode=0 Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.054542 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff"} Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.055017 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62"} Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.055039 4713 scope.go:117] "RemoveContainer" containerID="8f32da0ac0a9f06d791f2d1090c2ad8ad38bcf46a578523616f1cb9902d73f6a" Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.059686 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwj9r" event={"ID":"5d4dd3fb-43d0-46a8-9a41-1122358e82ce","Type":"ContainerStarted","Data":"74cd9c17c257b1219b67482856923fa8bc0fde1072d14af3ca75ab137698ee9c"} Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.059744 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwj9r" event={"ID":"5d4dd3fb-43d0-46a8-9a41-1122358e82ce","Type":"ContainerStarted","Data":"90762f5155c81ae1874b08643fa8d739e6cb7c58234aeaf27d50f85b460c15cf"} Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.059755 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwj9r" event={"ID":"5d4dd3fb-43d0-46a8-9a41-1122358e82ce","Type":"ContainerStarted","Data":"a0d67a1b86a7c0deebbb9f1cd103bc1d9a65fca613320a649021b0e97cf776b8"} Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.060013 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nwj9r" Jan 26 15:51:04 crc kubenswrapper[4713]: I0126 15:51:04.096261 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nwj9r" podStartSLOduration=3.09624415 podStartE2EDuration="3.09624415s" podCreationTimestamp="2026-01-26 15:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:51:04.094690755 +0000 UTC m=+1039.231707980" watchObservedRunningTime="2026-01-26 15:51:04.09624415 +0000 UTC m=+1039.233261385" Jan 26 15:51:12 crc kubenswrapper[4713]: I0126 15:51:12.123872 4713 generic.go:334] "Generic (PLEG): container finished" podID="434ec099-efe3-4f0e-812c-2b684c7f8274" containerID="48e85532d7069562dfd20f8cd28d471412b1a8a4e47e6a491424c3ca9e640221" exitCode=0 Jan 26 15:51:12 crc kubenswrapper[4713]: I0126 15:51:12.123947 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" 
event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerDied","Data":"48e85532d7069562dfd20f8cd28d471412b1a8a4e47e6a491424c3ca9e640221"} Jan 26 15:51:12 crc kubenswrapper[4713]: I0126 15:51:12.128076 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" event={"ID":"43b745e9-8cc0-4186-bf90-355ce248ab27","Type":"ContainerStarted","Data":"5ad3e881ce5b5b185783f97c293ea38bfcb760361394d3de23170e9e4dbd4cf8"} Jan 26 15:51:12 crc kubenswrapper[4713]: I0126 15:51:12.128225 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:12 crc kubenswrapper[4713]: I0126 15:51:12.173034 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" podStartSLOduration=1.727325687 podStartE2EDuration="11.173013719s" podCreationTimestamp="2026-01-26 15:51:01 +0000 UTC" firstStartedPulling="2026-01-26 15:51:02.096907354 +0000 UTC m=+1037.233924619" lastFinishedPulling="2026-01-26 15:51:11.542595416 +0000 UTC m=+1046.679612651" observedRunningTime="2026-01-26 15:51:12.171422094 +0000 UTC m=+1047.308439349" watchObservedRunningTime="2026-01-26 15:51:12.173013719 +0000 UTC m=+1047.310030954" Jan 26 15:51:13 crc kubenswrapper[4713]: I0126 15:51:13.137745 4713 generic.go:334] "Generic (PLEG): container finished" podID="434ec099-efe3-4f0e-812c-2b684c7f8274" containerID="702723f10650a427cff60bec1f371aa3c2294a12b0dc93d38d7246be3b03c98c" exitCode=0 Jan 26 15:51:13 crc kubenswrapper[4713]: I0126 15:51:13.137820 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerDied","Data":"702723f10650a427cff60bec1f371aa3c2294a12b0dc93d38d7246be3b03c98c"} Jan 26 15:51:13 crc kubenswrapper[4713]: I0126 15:51:13.229298 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nwj9r" Jan 26 15:51:14 crc kubenswrapper[4713]: I0126 15:51:14.146673 4713 generic.go:334] "Generic (PLEG): container finished" podID="434ec099-efe3-4f0e-812c-2b684c7f8274" containerID="aaeb3fb62c335faba15b2c9b5f56d09bb9fa69f8f5a8e172bc64c0efbe7f0385" exitCode=0 Jan 26 15:51:14 crc kubenswrapper[4713]: I0126 15:51:14.146782 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerDied","Data":"aaeb3fb62c335faba15b2c9b5f56d09bb9fa69f8f5a8e172bc64c0efbe7f0385"} Jan 26 15:51:15 crc kubenswrapper[4713]: I0126 15:51:15.157015 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"9604a609886694dd091917d2eea4b5542856a99afac93b07cfd7f03456b74330"} Jan 26 15:51:15 crc kubenswrapper[4713]: I0126 15:51:15.157377 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"44b8fd62ecaabbce35069f08a10608eb2b3a782fc62687f4a0c4eed1fa5775dc"} Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.166643 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"6decf65e1ea34fd9648cc5dd59b26230cf312b79ad001da0e29cc928ac088f1f"} Jan 26 15:51:16 crc 
kubenswrapper[4713]: I0126 15:51:16.166955 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"ec1203894fcdb7ca57cd0bade61f12ed8a26c3d23038673a5a86b2faf4e507a1"} Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.166970 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"b73582116526afba96c474b20cb6b3c8db100a423cf4922c1f922272085fc47f"} Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.229706 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.230655 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.233017 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-52zwg" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.233224 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.233618 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.284852 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.317092 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq8bh\" (UniqueName: \"kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh\") pod \"openstack-operator-index-kfhdb\" (UID: \"66e93968-81f1-48f1-b65a-5c8b750c3d46\") " pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.418412 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq8bh\" (UniqueName: \"kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh\") pod \"openstack-operator-index-kfhdb\" (UID: \"66e93968-81f1-48f1-b65a-5c8b750c3d46\") " pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.437118 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq8bh\" (UniqueName: \"kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh\") pod \"openstack-operator-index-kfhdb\" (UID: \"66e93968-81f1-48f1-b65a-5c8b750c3d46\") " pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:16 crc kubenswrapper[4713]: I0126 15:51:16.596092 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.034792 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.038649 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.175189 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhdb" event={"ID":"66e93968-81f1-48f1-b65a-5c8b750c3d46","Type":"ContainerStarted","Data":"5cefc3263f63511c09ed69c20fcaaf57284e2a1998393741f07bd23c6997d6ae"} Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.181139 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66jj8" event={"ID":"434ec099-efe3-4f0e-812c-2b684c7f8274","Type":"ContainerStarted","Data":"a475dbe61809cfff109dda61b3a870283fb5717450a87e8fa8561ccad2e9858e"} Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.181343 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:17 crc kubenswrapper[4713]: I0126 15:51:17.203832 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-66jj8" podStartSLOduration=6.500145786 podStartE2EDuration="16.203811185s" podCreationTimestamp="2026-01-26 15:51:01 +0000 UTC" firstStartedPulling="2026-01-26 15:51:01.822886976 +0000 UTC m=+1036.959904211" lastFinishedPulling="2026-01-26 15:51:11.526552375 +0000 UTC m=+1046.663569610" observedRunningTime="2026-01-26 15:51:17.201009424 +0000 UTC m=+1052.338026679" watchObservedRunningTime="2026-01-26 15:51:17.203811185 +0000 UTC m=+1052.340828420" Jan 26 15:51:19 crc kubenswrapper[4713]: I0126 15:51:19.611067 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.216452 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k8jhj"] Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.217434 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.225934 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k8jhj"] Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.273326 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwcz\" (UniqueName: \"kubernetes.io/projected/b2c70989-5c15-405f-b07d-4ae1a6160f6a-kube-api-access-8rwcz\") pod \"openstack-operator-index-k8jhj\" (UID: \"b2c70989-5c15-405f-b07d-4ae1a6160f6a\") " pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.376754 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwcz\" (UniqueName: \"kubernetes.io/projected/b2c70989-5c15-405f-b07d-4ae1a6160f6a-kube-api-access-8rwcz\") pod \"openstack-operator-index-k8jhj\" (UID: \"b2c70989-5c15-405f-b07d-4ae1a6160f6a\") " pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.407555 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwcz\" (UniqueName: \"kubernetes.io/projected/b2c70989-5c15-405f-b07d-4ae1a6160f6a-kube-api-access-8rwcz\") pod \"openstack-operator-index-k8jhj\" (UID: \"b2c70989-5c15-405f-b07d-4ae1a6160f6a\") " pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:20 crc kubenswrapper[4713]: I0126 15:51:20.542775 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:21 crc kubenswrapper[4713]: I0126 15:51:21.608824 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:21 crc kubenswrapper[4713]: I0126 15:51:21.626848 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t4l45" Jan 26 15:51:21 crc kubenswrapper[4713]: I0126 15:51:21.672813 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:21 crc kubenswrapper[4713]: I0126 15:51:21.742451 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-xqkgk" Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.194731 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k8jhj"] Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.243561 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhdb" event={"ID":"66e93968-81f1-48f1-b65a-5c8b750c3d46","Type":"ContainerStarted","Data":"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa"} Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.243566 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kfhdb" podUID="66e93968-81f1-48f1-b65a-5c8b750c3d46" containerName="registry-server" containerID="cri-o://09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa" gracePeriod=2 Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.245629 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k8jhj" 
event={"ID":"b2c70989-5c15-405f-b07d-4ae1a6160f6a","Type":"ContainerStarted","Data":"4b35ee677979132c2d8f3d043267e4b7a43e7363a7f782cb7c11e32f48f8e966"} Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.263763 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kfhdb" podStartSLOduration=1.462273495 podStartE2EDuration="9.263744899s" podCreationTimestamp="2026-01-26 15:51:16 +0000 UTC" firstStartedPulling="2026-01-26 15:51:17.03842645 +0000 UTC m=+1052.175443685" lastFinishedPulling="2026-01-26 15:51:24.839897854 +0000 UTC m=+1059.976915089" observedRunningTime="2026-01-26 15:51:25.262562795 +0000 UTC m=+1060.399580040" watchObservedRunningTime="2026-01-26 15:51:25.263744899 +0000 UTC m=+1060.400762134" Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.587795 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.654297 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq8bh\" (UniqueName: \"kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh\") pod \"66e93968-81f1-48f1-b65a-5c8b750c3d46\" (UID: \"66e93968-81f1-48f1-b65a-5c8b750c3d46\") " Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.658685 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh" (OuterVolumeSpecName: "kube-api-access-kq8bh") pod "66e93968-81f1-48f1-b65a-5c8b750c3d46" (UID: "66e93968-81f1-48f1-b65a-5c8b750c3d46"). InnerVolumeSpecName "kube-api-access-kq8bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:51:25 crc kubenswrapper[4713]: I0126 15:51:25.756715 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq8bh\" (UniqueName: \"kubernetes.io/projected/66e93968-81f1-48f1-b65a-5c8b750c3d46-kube-api-access-kq8bh\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.254754 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k8jhj" event={"ID":"b2c70989-5c15-405f-b07d-4ae1a6160f6a","Type":"ContainerStarted","Data":"36f4c41619cfe26a5972d3a166ddd15da5d5af068cb8e274204a9471f937eb96"} Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.256261 4713 generic.go:334] "Generic (PLEG): container finished" podID="66e93968-81f1-48f1-b65a-5c8b750c3d46" containerID="09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa" exitCode=0 Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.256295 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhdb" event={"ID":"66e93968-81f1-48f1-b65a-5c8b750c3d46","Type":"ContainerDied","Data":"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa"} Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.256309 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhdb" event={"ID":"66e93968-81f1-48f1-b65a-5c8b750c3d46","Type":"ContainerDied","Data":"5cefc3263f63511c09ed69c20fcaaf57284e2a1998393741f07bd23c6997d6ae"} Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.256327 4713 scope.go:117] "RemoveContainer" containerID="09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa" Jan 26 15:51:26 crc kubenswrapper[4713]: 
I0126 15:51:26.256446 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhdb" Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.275212 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k8jhj" podStartSLOduration=6.202556898 podStartE2EDuration="6.275194606s" podCreationTimestamp="2026-01-26 15:51:20 +0000 UTC" firstStartedPulling="2026-01-26 15:51:25.204139796 +0000 UTC m=+1060.341157031" lastFinishedPulling="2026-01-26 15:51:25.276777494 +0000 UTC m=+1060.413794739" observedRunningTime="2026-01-26 15:51:26.273396034 +0000 UTC m=+1061.410413269" watchObservedRunningTime="2026-01-26 15:51:26.275194606 +0000 UTC m=+1061.412211841" Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.275317 4713 scope.go:117] "RemoveContainer" containerID="09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa" Jan 26 15:51:26 crc kubenswrapper[4713]: E0126 15:51:26.275758 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa\": container with ID starting with 09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa not found: ID does not exist" containerID="09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa" Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.275831 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa"} err="failed to get container status \"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa\": rpc error: code = NotFound desc = could not find container \"09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa\": container with ID starting with 09b3d72ad867628a426ed125f628562de0afea8acb090e4e678302d3f08558aa not found: ID does not exist" Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.290830 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:26 crc kubenswrapper[4713]: I0126 15:51:26.295087 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kfhdb"] Jan 26 15:51:27 crc kubenswrapper[4713]: I0126 15:51:27.815619 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e93968-81f1-48f1-b65a-5c8b750c3d46" path="/var/lib/kubelet/pods/66e93968-81f1-48f1-b65a-5c8b750c3d46/volumes" Jan 26 15:51:30 crc kubenswrapper[4713]: I0126 15:51:30.543825 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:30 crc kubenswrapper[4713]: I0126 15:51:30.544933 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:30 crc kubenswrapper[4713]: I0126 15:51:30.587025 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:31 crc kubenswrapper[4713]: I0126 15:51:31.328574 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-k8jhj" Jan 26 15:51:31 crc kubenswrapper[4713]: I0126 15:51:31.611177 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-66jj8" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.512433 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp"] Jan 26 15:51:36 crc kubenswrapper[4713]: E0126 15:51:36.513230 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e93968-81f1-48f1-b65a-5c8b750c3d46" containerName="registry-server" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.513253 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e93968-81f1-48f1-b65a-5c8b750c3d46" containerName="registry-server" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.513744 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e93968-81f1-48f1-b65a-5c8b750c3d46" containerName="registry-server" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.515632 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.517670 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4knmp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.522493 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp"] Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.578856 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dc2\" (UniqueName: \"kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.579177 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.579202 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.680038 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dc2\" (UniqueName: \"kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.680510 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.681149 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.682519 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.682575 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.700079 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dc2\" (UniqueName: \"kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:36 crc kubenswrapper[4713]: I0126 15:51:36.844199 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:37 crc kubenswrapper[4713]: I0126 15:51:37.067453 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp"] Jan 26 15:51:37 crc kubenswrapper[4713]: W0126 15:51:37.080485 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0b8456_060e_49fe_bbe7_12d695b3a3dc.slice/crio-bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022 WatchSource:0}: Error finding container bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022: Status 404 returned error can't find the container with id bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022 Jan 26 15:51:37 crc kubenswrapper[4713]: I0126 15:51:37.389296 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerStarted","Data":"e25cc4d90e87602e898f4a3aa3dc760e2c42643f512233389cf202c9d0e1406f"} Jan 26 15:51:37 crc kubenswrapper[4713]: I0126 15:51:37.389340 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerStarted","Data":"bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022"} Jan 26 15:51:38 crc kubenswrapper[4713]: I0126 15:51:38.402901 4713 generic.go:334] "Generic (PLEG): container finished" podID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerID="e25cc4d90e87602e898f4a3aa3dc760e2c42643f512233389cf202c9d0e1406f" exitCode=0 Jan 26 15:51:38 crc kubenswrapper[4713]: I0126 15:51:38.402962 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerDied","Data":"e25cc4d90e87602e898f4a3aa3dc760e2c42643f512233389cf202c9d0e1406f"} Jan 26 15:51:39 crc kubenswrapper[4713]: I0126 15:51:39.411143 4713 generic.go:334] "Generic (PLEG): container finished" podID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerID="4d3a9de11f39be2c51eee9b1937663803147b245ed09ec1c2c87487b97081268" exitCode=0 Jan 26 15:51:39 crc kubenswrapper[4713]: I0126 15:51:39.411263 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerDied","Data":"4d3a9de11f39be2c51eee9b1937663803147b245ed09ec1c2c87487b97081268"} Jan 26 15:51:40 crc kubenswrapper[4713]: I0126 15:51:40.426788 4713 generic.go:334] "Generic (PLEG): container finished" podID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerID="b580d6a4354c3fe67502f773af823a59e3c1641dfc7b374a6927db311c6aa751" exitCode=0 Jan 26 15:51:40 crc kubenswrapper[4713]: I0126 15:51:40.426848 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerDied","Data":"b580d6a4354c3fe67502f773af823a59e3c1641dfc7b374a6927db311c6aa751"} Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.746774 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.854000 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4dc2\" (UniqueName: \"kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2\") pod \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.854110 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle\") pod \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.854136 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util\") pod \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\" (UID: \"db0b8456-060e-49fe-bbe7-12d695b3a3dc\") " Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.854981 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle" (OuterVolumeSpecName: "bundle") pod "db0b8456-060e-49fe-bbe7-12d695b3a3dc" (UID: "db0b8456-060e-49fe-bbe7-12d695b3a3dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.859899 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2" (OuterVolumeSpecName: "kube-api-access-p4dc2") pod "db0b8456-060e-49fe-bbe7-12d695b3a3dc" (UID: "db0b8456-060e-49fe-bbe7-12d695b3a3dc"). InnerVolumeSpecName "kube-api-access-p4dc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.867766 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util" (OuterVolumeSpecName: "util") pod "db0b8456-060e-49fe-bbe7-12d695b3a3dc" (UID: "db0b8456-060e-49fe-bbe7-12d695b3a3dc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.956262 4713 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.956312 4713 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db0b8456-060e-49fe-bbe7-12d695b3a3dc-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:41 crc kubenswrapper[4713]: I0126 15:51:41.956332 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4dc2\" (UniqueName: \"kubernetes.io/projected/db0b8456-060e-49fe-bbe7-12d695b3a3dc-kube-api-access-p4dc2\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:42 crc kubenswrapper[4713]: I0126 15:51:42.446244 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" event={"ID":"db0b8456-060e-49fe-bbe7-12d695b3a3dc","Type":"ContainerDied","Data":"bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022"} Jan 26 15:51:42 crc kubenswrapper[4713]: I0126 15:51:42.446499 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6f9a3357537d586a4daeacfeb16698dba04331ecd24cfd6809cc31968a9022" Jan 26 15:51:42 crc kubenswrapper[4713]: I0126 15:51:42.446289 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.860895 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb"] Jan 26 15:51:47 crc kubenswrapper[4713]: E0126 15:51:47.861668 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="extract" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.861683 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="extract" Jan 26 15:51:47 crc kubenswrapper[4713]: E0126 15:51:47.861696 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="util" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.861703 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="util" Jan 26 15:51:47 crc kubenswrapper[4713]: E0126 15:51:47.861716 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="pull" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.861722 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="pull" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.861867 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0b8456-060e-49fe-bbe7-12d695b3a3dc" containerName="extract" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.862501 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.864731 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lcmng" Jan 26 15:51:47 crc kubenswrapper[4713]: I0126 15:51:47.886669 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb"] Jan 26 15:51:48 crc kubenswrapper[4713]: I0126 15:51:48.052502 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqwd\" (UniqueName: \"kubernetes.io/projected/b41b6a3b-8d2a-4213-a114-f84a4ca574c0-kube-api-access-4hqwd\") pod \"openstack-operator-controller-init-8f6df5568-zmrnb\" (UID: \"b41b6a3b-8d2a-4213-a114-f84a4ca574c0\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:48 crc kubenswrapper[4713]: I0126 15:51:48.153748 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hqwd\" (UniqueName: \"kubernetes.io/projected/b41b6a3b-8d2a-4213-a114-f84a4ca574c0-kube-api-access-4hqwd\") pod \"openstack-operator-controller-init-8f6df5568-zmrnb\" (UID: \"b41b6a3b-8d2a-4213-a114-f84a4ca574c0\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:48 crc kubenswrapper[4713]: I0126 15:51:48.173238 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hqwd\" (UniqueName: \"kubernetes.io/projected/b41b6a3b-8d2a-4213-a114-f84a4ca574c0-kube-api-access-4hqwd\") pod \"openstack-operator-controller-init-8f6df5568-zmrnb\" (UID: \"b41b6a3b-8d2a-4213-a114-f84a4ca574c0\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:48 crc kubenswrapper[4713]: I0126 15:51:48.198006 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:48 crc kubenswrapper[4713]: I0126 15:51:48.692695 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb"] Jan 26 15:51:49 crc kubenswrapper[4713]: I0126 15:51:49.497707 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" event={"ID":"b41b6a3b-8d2a-4213-a114-f84a4ca574c0","Type":"ContainerStarted","Data":"1cd770c7a4f18192151354e4e9d95eb90790003fd0639ed35ba09a900225127d"} Jan 26 15:51:55 crc kubenswrapper[4713]: I0126 15:51:55.537329 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" event={"ID":"b41b6a3b-8d2a-4213-a114-f84a4ca574c0","Type":"ContainerStarted","Data":"42e5cfe1054931bf4f7e9d582dc3d8af1f6ecda84f925ed75d59c81c8cd8f8ec"} Jan 26 15:51:55 crc kubenswrapper[4713]: I0126 15:51:55.538772 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:51:55 crc kubenswrapper[4713]: I0126 15:51:55.569739 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" podStartSLOduration=2.754685514 podStartE2EDuration="8.569720032s" podCreationTimestamp="2026-01-26 15:51:47 +0000 UTC" firstStartedPulling="2026-01-26 15:51:48.698602933 +0000 UTC m=+1083.835620168" lastFinishedPulling="2026-01-26 15:51:54.513637441 +0000 UTC m=+1089.650654686" observedRunningTime="2026-01-26 15:51:55.564923634 +0000 UTC m=+1090.701940889" watchObservedRunningTime="2026-01-26 15:51:55.569720032 +0000 UTC m=+1090.706737267" Jan 26 15:52:08 crc kubenswrapper[4713]: I0126 15:52:08.202254 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-zmrnb" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.719193 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.720701 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.723819 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xc724" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.737286 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.738405 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.741191 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sjgm6" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.749988 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.758470 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.782715 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.783842 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.786119 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xn2dm" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.812049 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.812780 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.828640 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p658s" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.838902 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.849453 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.850382 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsct8\" (UniqueName: \"kubernetes.io/projected/fed44574-f4a7-42df-9179-b2f8a64d180e-kube-api-access-qsct8\") pod \"glance-operator-controller-manager-78fdd796fd-gjzk8\" (UID: \"fed44574-f4a7-42df-9179-b2f8a64d180e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.850432 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4j95\" (UniqueName: \"kubernetes.io/projected/6d87a00d-b4a5-449e-b744-d9680cbba82e-kube-api-access-w4j95\") pod \"barbican-operator-controller-manager-7f86f8796f-8sbhh\" (UID: \"6d87a00d-b4a5-449e-b744-d9680cbba82e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.850465 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk5x\" (UniqueName: 
\"kubernetes.io/projected/ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4-kube-api-access-pqk5x\") pod \"cinder-operator-controller-manager-7478f7dbf9-fzsfn\" (UID: \"ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.850525 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jnxg\" (UniqueName: \"kubernetes.io/projected/6fab0ebc-dfbb-45f5-9802-5cf0145acf7b-kube-api-access-9jnxg\") pod \"designate-operator-controller-manager-b45d7bf98-cqv2q\" (UID: \"6fab0ebc-dfbb-45f5-9802-5cf0145acf7b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.851444 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.851586 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.858756 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.865214 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.871890 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bv6jh" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.872277 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.877432 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.877741 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-klg6b" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.893697 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.894561 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.905673 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5dlcz" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.908875 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.915416 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"] Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.916243 4713 util.go:30] "No sandbox for pod can be found. 
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.921580 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"]
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.925428 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"]
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.967613 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"]
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.969141 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.972644 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cl8rh"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.976291 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-954l4"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.977521 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jnxg\" (UniqueName: \"kubernetes.io/projected/6fab0ebc-dfbb-45f5-9802-5cf0145acf7b-kube-api-access-9jnxg\") pod \"designate-operator-controller-manager-b45d7bf98-cqv2q\" (UID: \"6fab0ebc-dfbb-45f5-9802-5cf0145acf7b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.977568 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsct8\" (UniqueName: \"kubernetes.io/projected/fed44574-f4a7-42df-9179-b2f8a64d180e-kube-api-access-qsct8\") pod \"glance-operator-controller-manager-78fdd796fd-gjzk8\" (UID: \"fed44574-f4a7-42df-9179-b2f8a64d180e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.977601 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4j95\" (UniqueName: \"kubernetes.io/projected/6d87a00d-b4a5-449e-b744-d9680cbba82e-kube-api-access-w4j95\") pod \"barbican-operator-controller-manager-7f86f8796f-8sbhh\" (UID: \"6d87a00d-b4a5-449e-b744-d9680cbba82e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"
Jan 26 15:52:27 crc kubenswrapper[4713]: I0126 15:52:27.977634 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqk5x\" (UniqueName: \"kubernetes.io/projected/ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4-kube-api-access-pqk5x\") pod \"cinder-operator-controller-manager-7478f7dbf9-fzsfn\" (UID: \"ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.016965 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.017948 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.025743 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.026956 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.031668 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vglhl"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.031907 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-d8f6s"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.036479 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jnxg\" (UniqueName: \"kubernetes.io/projected/6fab0ebc-dfbb-45f5-9802-5cf0145acf7b-kube-api-access-9jnxg\") pod \"designate-operator-controller-manager-b45d7bf98-cqv2q\" (UID: \"6fab0ebc-dfbb-45f5-9802-5cf0145acf7b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.041478 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk5x\" (UniqueName: \"kubernetes.io/projected/ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4-kube-api-access-pqk5x\") pod \"cinder-operator-controller-manager-7478f7dbf9-fzsfn\" (UID: \"ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.057429 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.059141 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsct8\" (UniqueName: \"kubernetes.io/projected/fed44574-f4a7-42df-9179-b2f8a64d180e-kube-api-access-qsct8\") pod \"glance-operator-controller-manager-78fdd796fd-gjzk8\" (UID: \"fed44574-f4a7-42df-9179-b2f8a64d180e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.063223 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4j95\" (UniqueName: \"kubernetes.io/projected/6d87a00d-b4a5-449e-b744-d9680cbba82e-kube-api-access-w4j95\") pod \"barbican-operator-controller-manager-7f86f8796f-8sbhh\" (UID: \"6d87a00d-b4a5-449e-b744-d9680cbba82e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.063582 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080209 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zkl\" (UniqueName: \"kubernetes.io/projected/21c903f2-40b2-420b-830c-64298a2a77bb-kube-api-access-j4zkl\") pod \"ironic-operator-controller-manager-598f7747c9-pnkxb\" (UID: \"21c903f2-40b2-420b-830c-64298a2a77bb\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080591 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sjsk\" (UniqueName: \"kubernetes.io/projected/d4485006-069c-45c8-8515-ff65913e2d54-kube-api-access-4sjsk\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080634 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080671 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cjq\" (UniqueName: \"kubernetes.io/projected/c967ecd2-cf7b-428e-8e86-320c481901fd-kube-api-access-m5cjq\") pod \"heat-operator-controller-manager-594c8c9d5d-q9h27\" (UID: \"c967ecd2-cf7b-428e-8e86-320c481901fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080688 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbqzc\" (UniqueName: \"kubernetes.io/projected/83ccbdec-a448-4674-896e-9c634981df65-kube-api-access-wbqzc\") pod \"keystone-operator-controller-manager-b8b6d4659-6kns8\" (UID: \"83ccbdec-a448-4674-896e-9c634981df65\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.080717 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4sc\" (UniqueName: \"kubernetes.io/projected/a4e0ef5f-5c6e-4ceb-80c2-25769c178450-kube-api-access-ct4sc\") pod \"horizon-operator-controller-manager-77d5c5b54f-7hgnh\" (UID: \"a4e0ef5f-5c6e-4ceb-80c2-25769c178450\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.095051 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.103410 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.116422 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.117315 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.124765 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-78r9q"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.136427 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.140580 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.147860 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.168027 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.168983 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.176747 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7pw9m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187094 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zkl\" (UniqueName: \"kubernetes.io/projected/21c903f2-40b2-420b-830c-64298a2a77bb-kube-api-access-j4zkl\") pod \"ironic-operator-controller-manager-598f7747c9-pnkxb\" (UID: \"21c903f2-40b2-420b-830c-64298a2a77bb\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187174 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjsk\" (UniqueName: \"kubernetes.io/projected/d4485006-069c-45c8-8515-ff65913e2d54-kube-api-access-4sjsk\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187230 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187268 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5cjq\" (UniqueName: \"kubernetes.io/projected/c967ecd2-cf7b-428e-8e86-320c481901fd-kube-api-access-m5cjq\") pod \"heat-operator-controller-manager-594c8c9d5d-q9h27\" (UID: \"c967ecd2-cf7b-428e-8e86-320c481901fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187289 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq7nc\" (UniqueName: \"kubernetes.io/projected/51c3ef5e-a43e-4c76-aab9-ec9d22939005-kube-api-access-gq7nc\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j\" (UID: \"51c3ef5e-a43e-4c76-aab9-ec9d22939005\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187318 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldkq\" (UniqueName: \"kubernetes.io/projected/c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed-kube-api-access-2ldkq\") pod \"manila-operator-controller-manager-78c6999f6f-rvxn4\" (UID: \"c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187348 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbqzc\" (UniqueName: \"kubernetes.io/projected/83ccbdec-a448-4674-896e-9c634981df65-kube-api-access-wbqzc\") pod \"keystone-operator-controller-manager-b8b6d4659-6kns8\" (UID: \"83ccbdec-a448-4674-896e-9c634981df65\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.187402 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4sc\" (UniqueName: \"kubernetes.io/projected/a4e0ef5f-5c6e-4ceb-80c2-25769c178450-kube-api-access-ct4sc\") pod \"horizon-operator-controller-manager-77d5c5b54f-7hgnh\" (UID: \"a4e0ef5f-5c6e-4ceb-80c2-25769c178450\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"
Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.187977 4713 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.188022 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert podName:d4485006-069c-45c8-8515-ff65913e2d54 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:28.688006665 +0000 UTC m=+1123.825023890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert") pod "infra-operator-controller-manager-694cf4f878-rgk5d" (UID: "d4485006-069c-45c8-8515-ff65913e2d54") : secret "infra-operator-webhook-server-cert" not found
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.202860 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.203728 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.207304 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qfhsl"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.241081 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.242553 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.255431 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.256385 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.258982 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.259740 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.259986 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbqzc\" (UniqueName: \"kubernetes.io/projected/83ccbdec-a448-4674-896e-9c634981df65-kube-api-access-wbqzc\") pod \"keystone-operator-controller-manager-b8b6d4659-6kns8\" (UID: \"83ccbdec-a448-4674-896e-9c634981df65\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.264320 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-77cl8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.266002 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zkl\" (UniqueName: \"kubernetes.io/projected/21c903f2-40b2-420b-830c-64298a2a77bb-kube-api-access-j4zkl\") pod \"ironic-operator-controller-manager-598f7747c9-pnkxb\" (UID: \"21c903f2-40b2-420b-830c-64298a2a77bb\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.266081 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.269302 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4sc\" (UniqueName: \"kubernetes.io/projected/a4e0ef5f-5c6e-4ceb-80c2-25769c178450-kube-api-access-ct4sc\") pod \"horizon-operator-controller-manager-77d5c5b54f-7hgnh\" (UID: \"a4e0ef5f-5c6e-4ceb-80c2-25769c178450\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.270929 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-msxdq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.284963 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjsk\" (UniqueName: \"kubernetes.io/projected/d4485006-069c-45c8-8515-ff65913e2d54-kube-api-access-4sjsk\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.288208 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjxw\" (UniqueName: \"kubernetes.io/projected/67c02797-1141-4757-aa6e-de1678f8cf47-kube-api-access-khjxw\") pod \"neutron-operator-controller-manager-78d58447c5-lnw6c\" (UID: \"67c02797-1141-4757-aa6e-de1678f8cf47\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.288338 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/bcb2380b-e7a0-4f46-b6cb-23a57fa36fba-kube-api-access-pbdg6\") pod \"nova-operator-controller-manager-7bdb645866-7qhb9\" (UID: \"bcb2380b-e7a0-4f46-b6cb-23a57fa36fba\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.288402 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq7nc\" (UniqueName: \"kubernetes.io/projected/51c3ef5e-a43e-4c76-aab9-ec9d22939005-kube-api-access-gq7nc\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j\" (UID: \"51c3ef5e-a43e-4c76-aab9-ec9d22939005\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.288429 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldkq\" (UniqueName: \"kubernetes.io/projected/c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed-kube-api-access-2ldkq\") pod \"manila-operator-controller-manager-78c6999f6f-rvxn4\" (UID: \"c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.292461 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.297848 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5cjq\" (UniqueName: \"kubernetes.io/projected/c967ecd2-cf7b-428e-8e86-320c481901fd-kube-api-access-m5cjq\") pod \"heat-operator-controller-manager-594c8c9d5d-q9h27\" (UID: \"c967ecd2-cf7b-428e-8e86-320c481901fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.302539 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.316747 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldkq\" (UniqueName: \"kubernetes.io/projected/c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed-kube-api-access-2ldkq\") pod \"manila-operator-controller-manager-78c6999f6f-rvxn4\" (UID: \"c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.317646 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.318923 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.321817 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.322886 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.326125 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq7nc\" (UniqueName: \"kubernetes.io/projected/51c3ef5e-a43e-4c76-aab9-ec9d22939005-kube-api-access-gq7nc\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j\" (UID: \"51c3ef5e-a43e-4c76-aab9-ec9d22939005\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.334054 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7mz4l"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.334216 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.348584 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.354062 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.356414 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.364023 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nqtk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.369681 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.370753 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.375606 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pv2x7"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.378856 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392189 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/bcb2380b-e7a0-4f46-b6cb-23a57fa36fba-kube-api-access-pbdg6\") pod \"nova-operator-controller-manager-7bdb645866-7qhb9\" (UID: \"bcb2380b-e7a0-4f46-b6cb-23a57fa36fba\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392243 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkn7\" (UniqueName: \"kubernetes.io/projected/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-kube-api-access-gkkn7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392285 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mjx\" (UniqueName: \"kubernetes.io/projected/feea11ba-0497-418d-8316-8510b6d807bb-kube-api-access-45mjx\") pod \"ovn-operator-controller-manager-6f75f45d54-cndqq\" (UID: \"feea11ba-0497-418d-8316-8510b6d807bb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392315 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgl2q\" (UniqueName: \"kubernetes.io/projected/525d44f1-86e8-4e11-8022-d428ed5a8440-kube-api-access-bgl2q\") pod \"octavia-operator-controller-manager-5f4cd88d46-g42hg\" (UID: \"525d44f1-86e8-4e11-8022-d428ed5a8440\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392331 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392352 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjxw\" (UniqueName: \"kubernetes.io/projected/67c02797-1141-4757-aa6e-de1678f8cf47-kube-api-access-khjxw\") pod \"neutron-operator-controller-manager-78d58447c5-lnw6c\" (UID: \"67c02797-1141-4757-aa6e-de1678f8cf47\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.392732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.415969 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.417059 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.423573 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hcf46"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.428530 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/bcb2380b-e7a0-4f46-b6cb-23a57fa36fba-kube-api-access-pbdg6\") pod \"nova-operator-controller-manager-7bdb645866-7qhb9\" (UID: \"bcb2380b-e7a0-4f46-b6cb-23a57fa36fba\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.438253 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.483782 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbmmw"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.484625 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.485850 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjxw\" (UniqueName: \"kubernetes.io/projected/67c02797-1141-4757-aa6e-de1678f8cf47-kube-api-access-khjxw\") pod \"neutron-operator-controller-manager-78d58447c5-lnw6c\" (UID: \"67c02797-1141-4757-aa6e-de1678f8cf47\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.491400 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbmmw"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494624 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5x8\" (UniqueName: \"kubernetes.io/projected/a1e3d291-c14b-4645-9c72-dca8413eb5e7-kube-api-access-pm5x8\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2q6vz\" (UID: \"a1e3d291-c14b-4645-9c72-dca8413eb5e7\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494669 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkn7\" (UniqueName: \"kubernetes.io/projected/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-kube-api-access-gkkn7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494713 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w2cz\" (UniqueName: \"kubernetes.io/projected/e9395a15-d653-40bb-bb55-8a800b1a0dae-kube-api-access-8w2cz\") pod \"swift-operator-controller-manager-547cbdb99f-lhfk5\" (UID: \"e9395a15-d653-40bb-bb55-8a800b1a0dae\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494731 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mjx\" (UniqueName: \"kubernetes.io/projected/feea11ba-0497-418d-8316-8510b6d807bb-kube-api-access-45mjx\") pod \"ovn-operator-controller-manager-6f75f45d54-cndqq\" (UID: \"feea11ba-0497-418d-8316-8510b6d807bb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494754 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494771 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgl2q\" (UniqueName: \"kubernetes.io/projected/525d44f1-86e8-4e11-8022-d428ed5a8440-kube-api-access-bgl2q\") pod \"octavia-operator-controller-manager-5f4cd88d46-g42hg\" (UID: \"525d44f1-86e8-4e11-8022-d428ed5a8440\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.494803 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxnp\" (UniqueName: \"kubernetes.io/projected/d4a41bce-dc81-49f2-80a7-06545140458d-kube-api-access-lwxnp\") pod \"placement-operator-controller-manager-79d5ccc684-b6h7z\" (UID: \"d4a41bce-dc81-49f2-80a7-06545140458d\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"
Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.495475 4713 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.495516 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert podName:a4cc3f25-acc8-4ce3-8269-2ccb7f042709 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:28.99550109 +0000 UTC m=+1124.132518315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" (UID: "a4cc3f25-acc8-4ce3-8269-2ccb7f042709") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.496286 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.502832 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.511125 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pvwdp"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.521809 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.534448 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkn7\" (UniqueName: \"kubernetes.io/projected/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-kube-api-access-gkkn7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.553413 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mjx\" (UniqueName: \"kubernetes.io/projected/feea11ba-0497-418d-8316-8510b6d807bb-kube-api-access-45mjx\") pod \"ovn-operator-controller-manager-6f75f45d54-cndqq\" (UID: \"feea11ba-0497-418d-8316-8510b6d807bb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.558130 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.558853 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgl2q\" (UniqueName: \"kubernetes.io/projected/525d44f1-86e8-4e11-8022-d428ed5a8440-kube-api-access-bgl2q\") pod \"octavia-operator-controller-manager-5f4cd88d46-g42hg\" (UID: \"525d44f1-86e8-4e11-8022-d428ed5a8440\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.596533 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk554\" (UniqueName: \"kubernetes.io/projected/0e1fcfa7-ee98-4834-93b3-578a9463adca-kube-api-access-fk554\") pod \"test-operator-controller-manager-69797bbcbd-t6l4x\" (UID: \"0e1fcfa7-ee98-4834-93b3-578a9463adca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.596597 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w2cz\" (UniqueName: \"kubernetes.io/projected/e9395a15-d653-40bb-bb55-8a800b1a0dae-kube-api-access-8w2cz\") pod \"swift-operator-controller-manager-547cbdb99f-lhfk5\" (UID: \"e9395a15-d653-40bb-bb55-8a800b1a0dae\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.596689 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwxnp\" (UniqueName: \"kubernetes.io/projected/d4a41bce-dc81-49f2-80a7-06545140458d-kube-api-access-lwxnp\") pod \"placement-operator-controller-manager-79d5ccc684-b6h7z\" (UID: \"d4a41bce-dc81-49f2-80a7-06545140458d\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.601874 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rztt2\" (UniqueName: \"kubernetes.io/projected/69bffd4a-b644-47b2-90ba-83716eb3b40b-kube-api-access-rztt2\") pod \"watcher-operator-controller-manager-564965969-gbmmw\" (UID: \"69bffd4a-b644-47b2-90ba-83716eb3b40b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.604042 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm5x8\" (UniqueName: \"kubernetes.io/projected/a1e3d291-c14b-4645-9c72-dca8413eb5e7-kube-api-access-pm5x8\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2q6vz\" (UID: \"a1e3d291-c14b-4645-9c72-dca8413eb5e7\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.620304 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.634903 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w2cz\" (UniqueName: \"kubernetes.io/projected/e9395a15-d653-40bb-bb55-8a800b1a0dae-kube-api-access-8w2cz\") pod \"swift-operator-controller-manager-547cbdb99f-lhfk5\" (UID: \"e9395a15-d653-40bb-bb55-8a800b1a0dae\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.640932 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm5x8\" (UniqueName: \"kubernetes.io/projected/a1e3d291-c14b-4645-9c72-dca8413eb5e7-kube-api-access-pm5x8\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2q6vz\" (UID: \"a1e3d291-c14b-4645-9c72-dca8413eb5e7\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.656098 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.662449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwxnp\" (UniqueName: \"kubernetes.io/projected/d4a41bce-dc81-49f2-80a7-06545140458d-kube-api-access-lwxnp\") pod \"placement-operator-controller-manager-79d5ccc684-b6h7z\" (UID: \"d4a41bce-dc81-49f2-80a7-06545140458d\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.663485 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.670535 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.675266 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dnhkc"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.675515 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.675512 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.675934 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.683250 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.706404 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"]
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.707873 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.707960 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.707984 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.708006 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk554\" (UniqueName: \"kubernetes.io/projected/0e1fcfa7-ee98-4834-93b3-578a9463adca-kube-api-access-fk554\") pod \"test-operator-controller-manager-69797bbcbd-t6l4x\" (UID: \"0e1fcfa7-ee98-4834-93b3-578a9463adca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"
Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.708050 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blj5z\" (UniqueName: \"kubernetes.io/projected/a523ff90-92c7-49b5-a532-20d7b7246892-kube-api-access-blj5z\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"
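The reflector.go:368 "Caches populated" entries show kubelet starting a dedicated *v1.Secret watch for each secret a pod references (dockercfg pull secrets, metrics-server-cert, webhook-server-cert), so mounts can react as soon as the object appears; a populated cache does not mean the named Secret exists yet, which is why the errors below still fire. A client-go informer gives the same watch-and-react behavior, sketched here under the assumption of in-cluster config:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
            informers.WithNamespace("openstack-operators"))
        inf := factory.Core().V1().Secrets().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("secret appeared:", obj.(*corev1.Secret).Name)
            },
        })
        stop := make(chan struct{})
        factory.Start(stop)
        cache.WaitForCacheSync(stop, inf.HasSynced) // the "Caches populated" moment
        select {} // keep watching
    }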
pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.708111 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rztt2\" (UniqueName: \"kubernetes.io/projected/69bffd4a-b644-47b2-90ba-83716eb3b40b-kube-api-access-rztt2\") pod \"watcher-operator-controller-manager-564965969-gbmmw\" (UID: \"69bffd4a-b644-47b2-90ba-83716eb3b40b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.708476 4713 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.708521 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert podName:d4485006-069c-45c8-8515-ff65913e2d54 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:29.708507287 +0000 UTC m=+1124.845524522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert") pod "infra-operator-controller-manager-694cf4f878-rgk5d" (UID: "d4485006-069c-45c8-8515-ff65913e2d54") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.716403 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.731700 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.743037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rztt2\" (UniqueName: \"kubernetes.io/projected/69bffd4a-b644-47b2-90ba-83716eb3b40b-kube-api-access-rztt2\") pod \"watcher-operator-controller-manager-564965969-gbmmw\" (UID: \"69bffd4a-b644-47b2-90ba-83716eb3b40b\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.752721 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk554\" (UniqueName: \"kubernetes.io/projected/0e1fcfa7-ee98-4834-93b3-578a9463adca-kube-api-access-fk554\") pod \"test-operator-controller-manager-69797bbcbd-t6l4x\" (UID: \"0e1fcfa7-ee98-4834-93b3-578a9463adca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.757732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.794761 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.814490 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.814538 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.814583 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blj5z\" (UniqueName: \"kubernetes.io/projected/a523ff90-92c7-49b5-a532-20d7b7246892-kube-api-access-blj5z\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.815837 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.815891 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:29.31587285 +0000 UTC m=+1124.452890085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.815949 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: E0126 15:52:28.816041 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:29.316009374 +0000 UTC m=+1124.453026689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.849767 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blj5z\" (UniqueName: \"kubernetes.io/projected/a523ff90-92c7-49b5-a532-20d7b7246892-kube-api-access-blj5z\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.862140 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q"] Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.911100 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj"] Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.912098 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.919566 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-74nmv" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.928760 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqlp5\" (UniqueName: \"kubernetes.io/projected/3161c386-6b19-4c7e-aa02-8a95984cc71c-kube-api-access-fqlp5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-44jqj\" (UID: \"3161c386-6b19-4c7e-aa02-8a95984cc71c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" Jan 26 15:52:28 crc kubenswrapper[4713]: I0126 15:52:28.953280 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.031524 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqlp5\" (UniqueName: \"kubernetes.io/projected/3161c386-6b19-4c7e-aa02-8a95984cc71c-kube-api-access-fqlp5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-44jqj\" (UID: \"3161c386-6b19-4c7e-aa02-8a95984cc71c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.031655 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.031838 4713 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.031887 4713 nestedpendingoperations.go:348] Operation for 
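openstack-operator-controller-manager-7d6b58b596-rpgqj needs two Secret-backed volumes, webhook-certs (from webhook-server-cert) and metrics-certs (from metrics-server-cert), and both lookups fail above; in a working deployment these kubernetes.io/tls Secrets are normally created by cert-manager or the operator's own bootstrapping shortly afterwards, at which point the pending mounts succeed on a retry. Purely for illustration, creating such a Secret with client-go from a pre-generated key pair (the file paths and in-cluster config are assumptions, not something this log shows):

    package main

    import (
        "context"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        crt, err := os.ReadFile("tls.crt") // assumed pre-generated key pair
        if err != nil {
            panic(err)
        }
        key, err := os.ReadFile("tls.key")
        if err != nil {
            panic(err)
        }
        sec := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "webhook-server-cert",
                Namespace: "openstack-operators",
            },
            Type: corev1.SecretTypeTLS,
            Data: map[string][]byte{"tls.crt": crt, "tls.key": key},
        }
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        if _, err := kubernetes.NewForConfigOrDie(cfg).CoreV1().
            Secrets("openstack-operators").
            Create(context.Background(), sec, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }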
"{volumeName:kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert podName:a4cc3f25-acc8-4ce3-8269-2ccb7f042709 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:30.031871242 +0000 UTC m=+1125.168888477 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" (UID: "a4cc3f25-acc8-4ce3-8269-2ccb7f042709") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.042345 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.072235 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.092376 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqlp5\" (UniqueName: \"kubernetes.io/projected/3161c386-6b19-4c7e-aa02-8a95984cc71c-kube-api-access-fqlp5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-44jqj\" (UID: \"3161c386-6b19-4c7e-aa02-8a95984cc71c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.251414 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.281447 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" Jan 26 15:52:29 crc kubenswrapper[4713]: W0126 15:52:29.331165 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9b077e_c81e_4cf5_bc8d_c7405e7b25c4.slice/crio-e752725e05a67f3c81819532595e550b9e58b391728bdb578e00d00147d749c6 WatchSource:0}: Error finding container e752725e05a67f3c81819532595e550b9e58b391728bdb578e00d00147d749c6: Status 404 returned error can't find the container with id e752725e05a67f3c81819532595e550b9e58b391728bdb578e00d00147d749c6 Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.336088 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.336137 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.336348 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.336594 4713 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:30.336579317 +0000 UTC m=+1125.473596552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.336900 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.336925 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:30.336918236 +0000 UTC m=+1125.473935471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.391017 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.748235 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.748493 4713 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: E0126 15:52:29.748549 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert podName:d4485006-069c-45c8-8515-ff65913e2d54 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:31.748534542 +0000 UTC m=+1126.885551777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert") pod "infra-operator-controller-manager-694cf4f878-rgk5d" (UID: "d4485006-069c-45c8-8515-ff65913e2d54") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.773759 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.788965 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.823487 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" event={"ID":"c967ecd2-cf7b-428e-8e86-320c481901fd","Type":"ContainerStarted","Data":"cb53f2be1f26048d7b1892ee85316d5492676c320fc34dafbc084fce2bf596a6"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.823526 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.827847 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" event={"ID":"ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4","Type":"ContainerStarted","Data":"e752725e05a67f3c81819532595e550b9e58b391728bdb578e00d00147d749c6"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.835029 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" event={"ID":"83ccbdec-a448-4674-896e-9c634981df65","Type":"ContainerStarted","Data":"9ffa73f68cd1be73126b6772e62c3b969bcc2e069f31cde89cabce7a0e34239d"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.837352 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" event={"ID":"c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed","Type":"ContainerStarted","Data":"1b93cdd2840d0fdc38a1786d03e2e4aaab3703347491a9dba22c3716afe00c0a"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.846575 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" event={"ID":"fed44574-f4a7-42df-9179-b2f8a64d180e","Type":"ContainerStarted","Data":"1cf9d7508f6d530b353871598b806d6d3aa726e9d17a0f3fcbffe7871856f47e"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.849414 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" event={"ID":"51c3ef5e-a43e-4c76-aab9-ec9d22939005","Type":"ContainerStarted","Data":"9330651e3431ee7dcdec313b1a0f42a79f27fa25386b9f3f5aa663cd9c4acfd1"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.850754 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" event={"ID":"6fab0ebc-dfbb-45f5-9802-5cf0145acf7b","Type":"ContainerStarted","Data":"7ee58df73b20a785f8abedd171143520f62af37b333b8e6e55a93e5867b1466b"} Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.880975 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh"] Jan 26 15:52:29 crc 
kubenswrapper[4713]: I0126 15:52:29.902014 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb"] Jan 26 15:52:29 crc kubenswrapper[4713]: W0126 15:52:29.903436 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeea11ba_0497_418d_8316_8510b6d807bb.slice/crio-ea9ef31ea401082a92e25ee1e32392e3e46c8cd314e0ca564c3eddf66e4465f3 WatchSource:0}: Error finding container ea9ef31ea401082a92e25ee1e32392e3e46c8cd314e0ca564c3eddf66e4465f3: Status 404 returned error can't find the container with id ea9ef31ea401082a92e25ee1e32392e3e46c8cd314e0ca564c3eddf66e4465f3 Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.905101 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.919255 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5"] Jan 26 15:52:29 crc kubenswrapper[4713]: W0126 15:52:29.924793 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9395a15_d653_40bb_bb55_8a800b1a0dae.slice/crio-4b0ef0e6fa08fa6171a6f73277f338329fccf95163d6d99aef5c6d5dad5895b1 WatchSource:0}: Error finding container 4b0ef0e6fa08fa6171a6f73277f338329fccf95163d6d99aef5c6d5dad5895b1: Status 404 returned error can't find the container with id 4b0ef0e6fa08fa6171a6f73277f338329fccf95163d6d99aef5c6d5dad5895b1 Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.932111 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c"] Jan 26 15:52:29 crc kubenswrapper[4713]: I0126 15:52:29.936263 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh"] Jan 26 15:52:29 crc kubenswrapper[4713]: W0126 15:52:29.946671 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c02797_1141_4757_aa6e_de1678f8cf47.slice/crio-ba11b66554f1e2f3b143070397f88b9894680fdda8c5f05a49554796f227bd26 WatchSource:0}: Error finding container ba11b66554f1e2f3b143070397f88b9894680fdda8c5f05a49554796f227bd26: Status 404 returned error can't find the container with id ba11b66554f1e2f3b143070397f88b9894680fdda8c5f05a49554796f227bd26 Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.061214 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.061593 4713 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.061651 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert podName:a4cc3f25-acc8-4ce3-8269-2ccb7f042709 nodeName:}" failed. 
No retries permitted until 2026-01-26 15:52:32.061633876 +0000 UTC m=+1127.198651111 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" (UID: "a4cc3f25-acc8-4ce3-8269-2ccb7f042709") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.094764 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z"] Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.164038 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9"] Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.230937 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbmmw"] Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.232074 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbdg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-7qhb9_openstack-operators(bcb2380b-e7a0-4f46-b6cb-23a57fa36fba): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.233516 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" podUID="bcb2380b-e7a0-4f46-b6cb-23a57fa36fba" Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.366966 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.367016 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.367036 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.367072 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:32.367056212 +0000 UTC m=+1127.504073447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.367185 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.367221 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:32.367210097 +0000 UTC m=+1127.504227342 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.434330 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x"] Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.448608 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg"] Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.475401 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz"] Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.495074 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj"] Jan 26 15:52:30 crc kubenswrapper[4713]: W0126 15:52:30.510269 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e3d291_c14b_4645_9c72_dca8413eb5e7.slice/crio-47cd1fa9f2d13d96de8d432cd203d6e9122e673b30932d3db0d27e0c871efa21 WatchSource:0}: Error finding container 47cd1fa9f2d13d96de8d432cd203d6e9122e673b30932d3db0d27e0c871efa21: Status 404 returned error can't find the container with id 47cd1fa9f2d13d96de8d432cd203d6e9122e673b30932d3db0d27e0c871efa21 Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.522651 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pm5x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fd4748d4d-2q6vz_openstack-operators(a1e3d291-c14b-4645-9c72-dca8413eb5e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.523812 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" podUID="a1e3d291-c14b-4645-9c72-dca8413eb5e7" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.532954 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqlp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-44jqj_openstack-operators(3161c386-6b19-4c7e-aa02-8a95984cc71c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.534340 
4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" podUID="3161c386-6b19-4c7e-aa02-8a95984cc71c" Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.865073 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" event={"ID":"525d44f1-86e8-4e11-8022-d428ed5a8440","Type":"ContainerStarted","Data":"b2a40cf487fae3596351654c043f35e8bb52e8337da1a5ea9468b52fc8e2eb61"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.867136 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" event={"ID":"feea11ba-0497-418d-8316-8510b6d807bb","Type":"ContainerStarted","Data":"ea9ef31ea401082a92e25ee1e32392e3e46c8cd314e0ca564c3eddf66e4465f3"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.869274 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" event={"ID":"3161c386-6b19-4c7e-aa02-8a95984cc71c","Type":"ContainerStarted","Data":"a34b6bba85b7736e441a4315a28c33597c4043d5ebbfa3d3816c1ea193b788d0"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.871473 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" event={"ID":"6d87a00d-b4a5-449e-b744-d9680cbba82e","Type":"ContainerStarted","Data":"1c76580c949462513e14af66293da886aa6c392a226f4231c5d638a0076cf551"} Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.875289 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" podUID="3161c386-6b19-4c7e-aa02-8a95984cc71c" Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.875414 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" event={"ID":"a1e3d291-c14b-4645-9c72-dca8413eb5e7","Type":"ContainerStarted","Data":"47cd1fa9f2d13d96de8d432cd203d6e9122e673b30932d3db0d27e0c871efa21"} Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.877623 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" podUID="a1e3d291-c14b-4645-9c72-dca8413eb5e7" Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.878588 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" event={"ID":"d4a41bce-dc81-49f2-80a7-06545140458d","Type":"ContainerStarted","Data":"2387552a971d63cf5c926f3cc6b9087f5a83715513789e4b7daa3bc80c6b532b"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.882198 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" 
event={"ID":"69bffd4a-b644-47b2-90ba-83716eb3b40b","Type":"ContainerStarted","Data":"e7803ec9a4ba0367473baecd51a1782d3238b49649f44f52111c88786f485937"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.885085 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" event={"ID":"21c903f2-40b2-420b-830c-64298a2a77bb","Type":"ContainerStarted","Data":"f89d61bfefc45f6e8b4c27dd261b0ecbf09acf6388b6aaaa9643f2f46a1f4c4a"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.890930 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5" event={"ID":"e9395a15-d653-40bb-bb55-8a800b1a0dae","Type":"ContainerStarted","Data":"4b0ef0e6fa08fa6171a6f73277f338329fccf95163d6d99aef5c6d5dad5895b1"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.894513 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" event={"ID":"a4e0ef5f-5c6e-4ceb-80c2-25769c178450","Type":"ContainerStarted","Data":"0fad13b39ef25c938d4a8e99d071c10c66f86eaa24347a316ee87aff4c3e848e"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.897105 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" event={"ID":"bcb2380b-e7a0-4f46-b6cb-23a57fa36fba","Type":"ContainerStarted","Data":"3a04c659f329a02fb5f56edfba302a0daa6e80c2ffc73b899b5faf727ff27afc"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.898103 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" event={"ID":"67c02797-1141-4757-aa6e-de1678f8cf47","Type":"ContainerStarted","Data":"ba11b66554f1e2f3b143070397f88b9894680fdda8c5f05a49554796f227bd26"} Jan 26 15:52:30 crc kubenswrapper[4713]: I0126 15:52:30.900159 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" event={"ID":"0e1fcfa7-ee98-4834-93b3-578a9463adca","Type":"ContainerStarted","Data":"bb583672aeceea4f6ecc1bf0b1e110fe182428711a5877a5f65102aedb73d70c"} Jan 26 15:52:30 crc kubenswrapper[4713]: E0126 15:52:30.901946 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" podUID="bcb2380b-e7a0-4f46-b6cb-23a57fa36fba" Jan 26 15:52:31 crc kubenswrapper[4713]: I0126 15:52:31.786183 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:31 crc kubenswrapper[4713]: E0126 15:52:31.786746 4713 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:31 crc kubenswrapper[4713]: E0126 15:52:31.786809 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert 
podName:d4485006-069c-45c8-8515-ff65913e2d54 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:35.78678769 +0000 UTC m=+1130.923804925 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert") pod "infra-operator-controller-manager-694cf4f878-rgk5d" (UID: "d4485006-069c-45c8-8515-ff65913e2d54") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:31 crc kubenswrapper[4713]: E0126 15:52:31.920061 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" podUID="3161c386-6b19-4c7e-aa02-8a95984cc71c" Jan 26 15:52:31 crc kubenswrapper[4713]: E0126 15:52:31.920466 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" podUID="bcb2380b-e7a0-4f46-b6cb-23a57fa36fba" Jan 26 15:52:31 crc kubenswrapper[4713]: E0126 15:52:31.920517 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" podUID="a1e3d291-c14b-4645-9c72-dca8413eb5e7" Jan 26 15:52:32 crc kubenswrapper[4713]: I0126 15:52:32.091280 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.092009 4713 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.092103 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert podName:a4cc3f25-acc8-4ce3-8269-2ccb7f042709 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:36.092084933 +0000 UTC m=+1131.229102168 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" (UID: "a4cc3f25-acc8-4ce3-8269-2ccb7f042709") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:32 crc kubenswrapper[4713]: I0126 15:52:32.404683 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:32 crc kubenswrapper[4713]: I0126 15:52:32.404738 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.404943 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.405003 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.405072 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:36.405017142 +0000 UTC m=+1131.542034377 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:32 crc kubenswrapper[4713]: E0126 15:52:32.405090 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:36.405083104 +0000 UTC m=+1131.542100339 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:35 crc kubenswrapper[4713]: I0126 15:52:35.884858 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:35 crc kubenswrapper[4713]: E0126 15:52:35.885000 4713 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:35 crc kubenswrapper[4713]: E0126 15:52:35.885511 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert podName:d4485006-069c-45c8-8515-ff65913e2d54 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:43.885458263 +0000 UTC m=+1139.022475498 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert") pod "infra-operator-controller-manager-694cf4f878-rgk5d" (UID: "d4485006-069c-45c8-8515-ff65913e2d54") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: I0126 15:52:36.190852 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.191036 4713 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.191087 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert podName:a4cc3f25-acc8-4ce3-8269-2ccb7f042709 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:44.191072765 +0000 UTC m=+1139.328090000 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" (UID: "a4cc3f25-acc8-4ce3-8269-2ccb7f042709") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: I0126 15:52:36.494921 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:36 crc kubenswrapper[4713]: I0126 15:52:36.495011 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.495094 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.495188 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.495207 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:44.495182023 +0000 UTC m=+1139.632199288 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:36 crc kubenswrapper[4713]: E0126 15:52:36.495257 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:44.495237394 +0000 UTC m=+1139.632254729 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:43 crc kubenswrapper[4713]: E0126 15:52:43.491228 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 15:52:43 crc kubenswrapper[4713]: E0126 15:52:43.491812 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2ldkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-rvxn4_openstack-operators(c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:43 crc kubenswrapper[4713]: E0126 15:52:43.493586 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" 
podUID="c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed" Jan 26 15:52:43 crc kubenswrapper[4713]: I0126 15:52:43.903019 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:43 crc kubenswrapper[4713]: I0126 15:52:43.913939 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4485006-069c-45c8-8515-ff65913e2d54-cert\") pod \"infra-operator-controller-manager-694cf4f878-rgk5d\" (UID: \"d4485006-069c-45c8-8515-ff65913e2d54\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:44 crc kubenswrapper[4713]: E0126 15:52:44.031295 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" podUID="c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.177465 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.210520 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.221046 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4cc3f25-acc8-4ce3-8269-2ccb7f042709-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dm59m\" (UID: \"a4cc3f25-acc8-4ce3-8269-2ccb7f042709\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.431578 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.516060 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:44 crc kubenswrapper[4713]: I0126 15:52:44.516110 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:52:44 crc kubenswrapper[4713]: E0126 15:52:44.516268 4713 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:52:44 crc kubenswrapper[4713]: E0126 15:52:44.516297 4713 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:52:44 crc kubenswrapper[4713]: E0126 15:52:44.516321 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:53:00.516305333 +0000 UTC m=+1155.653322568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "metrics-server-cert" not found Jan 26 15:52:44 crc kubenswrapper[4713]: E0126 15:52:44.516418 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs podName:a523ff90-92c7-49b5-a532-20d7b7246892 nodeName:}" failed. No retries permitted until 2026-01-26 15:53:00.516394325 +0000 UTC m=+1155.653411620 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-rpgqj" (UID: "a523ff90-92c7-49b5-a532-20d7b7246892") : secret "webhook-server-cert" not found Jan 26 15:52:45 crc kubenswrapper[4713]: E0126 15:52:45.600955 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 15:52:45 crc kubenswrapper[4713]: E0126 15:52:45.601175 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4zkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-pnkxb_openstack-operators(21c903f2-40b2-420b-830c-64298a2a77bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:45 crc kubenswrapper[4713]: E0126 15:52:45.602500 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" 
podUID="21c903f2-40b2-420b-830c-64298a2a77bb" Jan 26 15:52:46 crc kubenswrapper[4713]: E0126 15:52:46.049980 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" podUID="21c903f2-40b2-420b-830c-64298a2a77bb" Jan 26 15:52:46 crc kubenswrapper[4713]: E0126 15:52:46.409581 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 26 15:52:46 crc kubenswrapper[4713]: E0126 15:52:46.409792 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pqk5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-fzsfn_openstack-operators(ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:46 crc kubenswrapper[4713]: E0126 15:52:46.410911 4713 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" podUID="ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4" Jan 26 15:52:47 crc kubenswrapper[4713]: E0126 15:52:47.054511 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" podUID="ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4" Jan 26 15:52:47 crc kubenswrapper[4713]: E0126 15:52:47.111479 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 15:52:47 crc kubenswrapper[4713]: E0126 15:52:47.111735 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsct8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-gjzk8_openstack-operators(fed44574-f4a7-42df-9179-b2f8a64d180e): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:47 crc kubenswrapper[4713]: E0126 15:52:47.112920 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" podUID="fed44574-f4a7-42df-9179-b2f8a64d180e" Jan 26 15:52:48 crc kubenswrapper[4713]: E0126 15:52:48.063915 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" podUID="fed44574-f4a7-42df-9179-b2f8a64d180e" Jan 26 15:52:49 crc kubenswrapper[4713]: E0126 15:52:49.832070 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 15:52:49 crc kubenswrapper[4713]: E0126 15:52:49.832302 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jnxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-cqv2q_openstack-operators(6fab0ebc-dfbb-45f5-9802-5cf0145acf7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:49 crc kubenswrapper[4713]: E0126 15:52:49.833757 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" podUID="6fab0ebc-dfbb-45f5-9802-5cf0145acf7b" Jan 26 15:52:50 crc kubenswrapper[4713]: E0126 15:52:50.079283 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" podUID="6fab0ebc-dfbb-45f5-9802-5cf0145acf7b" Jan 26 15:52:50 crc kubenswrapper[4713]: E0126 15:52:50.810267 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 26 15:52:50 crc kubenswrapper[4713]: E0126 15:52:50.810744 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gq7nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j_openstack-operators(51c3ef5e-a43e-4c76-aab9-ec9d22939005): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:50 crc kubenswrapper[4713]: E0126 15:52:50.812359 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" podUID="51c3ef5e-a43e-4c76-aab9-ec9d22939005" Jan 26 15:52:51 crc kubenswrapper[4713]: E0126 15:52:51.091225 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" podUID="51c3ef5e-a43e-4c76-aab9-ec9d22939005" Jan 26 15:52:51 crc kubenswrapper[4713]: E0126 15:52:51.392130 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 15:52:51 crc kubenswrapper[4713]: E0126 15:52:51.392326 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khjxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-lnw6c_openstack-operators(67c02797-1141-4757-aa6e-de1678f8cf47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:51 crc kubenswrapper[4713]: E0126 15:52:51.393590 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" podUID="67c02797-1141-4757-aa6e-de1678f8cf47" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.017222 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.017403 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwxnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-b6h7z_openstack-operators(d4a41bce-dc81-49f2-80a7-06545140458d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.018608 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" podUID="d4a41bce-dc81-49f2-80a7-06545140458d" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.096218 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" podUID="67c02797-1141-4757-aa6e-de1678f8cf47" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.096642 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" podUID="d4a41bce-dc81-49f2-80a7-06545140458d" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.760372 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.760605 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbqzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-6kns8_openstack-operators(83ccbdec-a448-4674-896e-9c634981df65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:52 crc kubenswrapper[4713]: E0126 15:52:52.762649 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" podUID="83ccbdec-a448-4674-896e-9c634981df65" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.113245 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" podUID="83ccbdec-a448-4674-896e-9c634981df65" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.456732 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.457139 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5cjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-q9h27_openstack-operators(c967ecd2-cf7b-428e-8e86-320c481901fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.458280 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" podUID="c967ecd2-cf7b-428e-8e86-320c481901fd" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.978457 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.978631 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45mjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-cndqq_openstack-operators(feea11ba-0497-418d-8316-8510b6d807bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:52:53 crc kubenswrapper[4713]: E0126 15:52:53.979940 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" podUID="feea11ba-0497-418d-8316-8510b6d807bb" Jan 26 15:52:54 crc kubenswrapper[4713]: E0126 15:52:54.124260 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" podUID="feea11ba-0497-418d-8316-8510b6d807bb" Jan 26 15:52:54 crc kubenswrapper[4713]: E0126 15:52:54.124325 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" podUID="c967ecd2-cf7b-428e-8e86-320c481901fd" Jan 26 15:52:54 crc kubenswrapper[4713]: I0126 15:52:54.763822 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d"] Jan 26 15:52:55 crc kubenswrapper[4713]: I0126 15:52:55.137170 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" event={"ID":"d4485006-069c-45c8-8515-ff65913e2d54","Type":"ContainerStarted","Data":"804d3f7f2a58ed5a8b00ab469e194d54f13c4da130f6e019cda078998c43b610"} Jan 26 15:52:55 crc kubenswrapper[4713]: I0126 15:52:55.410131 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m"] Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.152200 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5" event={"ID":"e9395a15-d653-40bb-bb55-8a800b1a0dae","Type":"ContainerStarted","Data":"dede4dd13f1b11968210178109df25aedb04e5e09a78e86d737070f832ae373b"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.153245 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.158832 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" event={"ID":"bcb2380b-e7a0-4f46-b6cb-23a57fa36fba","Type":"ContainerStarted","Data":"f05aa50627d7ca7c1fd0d40922049e0fff09b8d704a3ba7e0e3e5ae6f4139b99"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.159424 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.179671 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5" podStartSLOduration=4.6651607330000004 podStartE2EDuration="28.17964736s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.936138699 +0000 UTC m=+1125.073155934" lastFinishedPulling="2026-01-26 15:52:53.450625326 +0000 UTC m=+1148.587642561" observedRunningTime="2026-01-26 15:52:56.175208044 +0000 UTC m=+1151.312225279" watchObservedRunningTime="2026-01-26 15:52:56.17964736 +0000 UTC m=+1151.316664595" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.181239 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" event={"ID":"0e1fcfa7-ee98-4834-93b3-578a9463adca","Type":"ContainerStarted","Data":"ed804fb699fa7516d2a21e6050897c30cf3e3dcd08763fd90fbca99813f6c992"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.181947 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.207566 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" 
event={"ID":"69bffd4a-b644-47b2-90ba-83716eb3b40b","Type":"ContainerStarted","Data":"fc4b0fd05281cfa85a19b425e872f2618966e4eb82efe9691488c877e5e6c887"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.208414 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.211010 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" podStartSLOduration=3.337215009 podStartE2EDuration="28.210991869s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.231863551 +0000 UTC m=+1125.368880786" lastFinishedPulling="2026-01-26 15:52:55.105640411 +0000 UTC m=+1150.242657646" observedRunningTime="2026-01-26 15:52:56.207332645 +0000 UTC m=+1151.344349880" watchObservedRunningTime="2026-01-26 15:52:56.210991869 +0000 UTC m=+1151.348009094" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.213981 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" event={"ID":"a1e3d291-c14b-4645-9c72-dca8413eb5e7","Type":"ContainerStarted","Data":"628ef39712212ed2610a20bc33e76e5f52343d7ab3a208f12c32584c3564408b"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.214722 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.216045 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" event={"ID":"525d44f1-86e8-4e11-8022-d428ed5a8440","Type":"ContainerStarted","Data":"14354244ed4985aa7cdb73d5b3646917b81d3998d2cc5514b000ffa737b4d702"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.216418 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.226148 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" event={"ID":"3161c386-6b19-4c7e-aa02-8a95984cc71c","Type":"ContainerStarted","Data":"51bdd30cf97bfe8913fd48b3a0c54a054f9a04c2d54840baa7f616c28c959282"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.235027 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" event={"ID":"a4e0ef5f-5c6e-4ceb-80c2-25769c178450","Type":"ContainerStarted","Data":"e35a4ba3e21d18ab7fa1cd1d10aea189e9cac8ecb10b0c825d62f22f1f9a2035"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.235853 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.252488 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" podStartSLOduration=4.78043011 podStartE2EDuration="28.252467744s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.495243085 +0000 UTC m=+1125.632260320" lastFinishedPulling="2026-01-26 15:52:53.967280719 +0000 UTC m=+1149.104297954" 
observedRunningTime="2026-01-26 15:52:56.247318108 +0000 UTC m=+1151.384335343" watchObservedRunningTime="2026-01-26 15:52:56.252467744 +0000 UTC m=+1151.389484979" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.282658 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" event={"ID":"6d87a00d-b4a5-449e-b744-d9680cbba82e","Type":"ContainerStarted","Data":"19c4e6ea9765410a52edb230571da520e982d5778e524f7226983fa29adb5f3d"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.283482 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.304581 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" event={"ID":"a4cc3f25-acc8-4ce3-8269-2ccb7f042709","Type":"ContainerStarted","Data":"8357c7a111ab299a7ecba388e747c06bd9bd704f4ec3ef9cb9f773262438a560"} Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.314244 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" podStartSLOduration=3.5986844380000003 podStartE2EDuration="28.314223784s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.522507598 +0000 UTC m=+1125.659524833" lastFinishedPulling="2026-01-26 15:52:55.238046944 +0000 UTC m=+1150.375064179" observedRunningTime="2026-01-26 15:52:56.312756203 +0000 UTC m=+1151.449773438" watchObservedRunningTime="2026-01-26 15:52:56.314223784 +0000 UTC m=+1151.451241019" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.342228 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" podStartSLOduration=5.128812305 podStartE2EDuration="28.342208008s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.23183134 +0000 UTC m=+1125.368848565" lastFinishedPulling="2026-01-26 15:52:53.445226993 +0000 UTC m=+1148.582244268" observedRunningTime="2026-01-26 15:52:56.335551339 +0000 UTC m=+1151.472568574" watchObservedRunningTime="2026-01-26 15:52:56.342208008 +0000 UTC m=+1151.479225243" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.405853 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" podStartSLOduration=5.469911072 podStartE2EDuration="28.405831501s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.509089708 +0000 UTC m=+1125.646106943" lastFinishedPulling="2026-01-26 15:52:53.445010137 +0000 UTC m=+1148.582027372" observedRunningTime="2026-01-26 15:52:56.37263766 +0000 UTC m=+1151.509654895" watchObservedRunningTime="2026-01-26 15:52:56.405831501 +0000 UTC m=+1151.542848736" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.407472 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-44jqj" podStartSLOduration=3.745818909 podStartE2EDuration="28.407465287s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.5328164 +0000 UTC m=+1125.669833635" lastFinishedPulling="2026-01-26 15:52:55.194462778 +0000 UTC 
m=+1150.331480013" observedRunningTime="2026-01-26 15:52:56.402357572 +0000 UTC m=+1151.539374807" watchObservedRunningTime="2026-01-26 15:52:56.407465287 +0000 UTC m=+1151.544482512" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.433031 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" podStartSLOduration=5.898011684 podStartE2EDuration="29.433011051s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.910316658 +0000 UTC m=+1125.047333893" lastFinishedPulling="2026-01-26 15:52:53.445316005 +0000 UTC m=+1148.582333260" observedRunningTime="2026-01-26 15:52:56.428096342 +0000 UTC m=+1151.565113577" watchObservedRunningTime="2026-01-26 15:52:56.433011051 +0000 UTC m=+1151.570028286" Jan 26 15:52:56 crc kubenswrapper[4713]: I0126 15:52:56.456310 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" podStartSLOduration=5.957569072 podStartE2EDuration="29.456287601s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.951430653 +0000 UTC m=+1125.088447888" lastFinishedPulling="2026-01-26 15:52:53.450149142 +0000 UTC m=+1148.587166417" observedRunningTime="2026-01-26 15:52:56.44884944 +0000 UTC m=+1151.585866675" watchObservedRunningTime="2026-01-26 15:52:56.456287601 +0000 UTC m=+1151.593304836" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.336242 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" event={"ID":"c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed","Type":"ContainerStarted","Data":"9e8df9e427649300b2972892218c97dd2491f955225a2f76d3000401fa2ce8cb"} Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.336839 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.337288 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" event={"ID":"d4485006-069c-45c8-8515-ff65913e2d54","Type":"ContainerStarted","Data":"719c10a343bebbc17e6b64463bac5a643f12aa27b9c3fee861e0786d30c876d6"} Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.337400 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.338333 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" event={"ID":"a4cc3f25-acc8-4ce3-8269-2ccb7f042709","Type":"ContainerStarted","Data":"2b826f354b2cd69eef9594a4d152e512aa4ddaf9ba4643aefd9f39b794968415"} Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.338478 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.351231 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" podStartSLOduration=3.245557148 podStartE2EDuration="33.351217659s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" 
firstStartedPulling="2026-01-26 15:52:29.494599945 +0000 UTC m=+1124.631617180" lastFinishedPulling="2026-01-26 15:52:59.600260456 +0000 UTC m=+1154.737277691" observedRunningTime="2026-01-26 15:53:00.350278263 +0000 UTC m=+1155.487295508" watchObservedRunningTime="2026-01-26 15:53:00.351217659 +0000 UTC m=+1155.488234894" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.375420 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" podStartSLOduration=28.192714221 podStartE2EDuration="32.375405835s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:55.416826451 +0000 UTC m=+1150.553843686" lastFinishedPulling="2026-01-26 15:52:59.599518045 +0000 UTC m=+1154.736535300" observedRunningTime="2026-01-26 15:53:00.37242825 +0000 UTC m=+1155.509445485" watchObservedRunningTime="2026-01-26 15:53:00.375405835 +0000 UTC m=+1155.512423070" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.396705 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" podStartSLOduration=28.823922369 podStartE2EDuration="33.396679328s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:55.026316623 +0000 UTC m=+1150.163333858" lastFinishedPulling="2026-01-26 15:52:59.599073582 +0000 UTC m=+1154.736090817" observedRunningTime="2026-01-26 15:53:00.391007567 +0000 UTC m=+1155.528024812" watchObservedRunningTime="2026-01-26 15:53:00.396679328 +0000 UTC m=+1155.533696573" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.594053 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.594113 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.599919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.599942 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a523ff90-92c7-49b5-a532-20d7b7246892-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-rpgqj\" (UID: \"a523ff90-92c7-49b5-a532-20d7b7246892\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.877002 4713 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dnhkc" Jan 26 15:53:00 crc kubenswrapper[4713]: I0126 15:53:00.885961 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:01 crc kubenswrapper[4713]: I0126 15:53:01.393380 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj"] Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.355519 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" event={"ID":"a523ff90-92c7-49b5-a532-20d7b7246892","Type":"ContainerStarted","Data":"a71549d4c5ea47cd1734c3f823fd803bf05196273c79db9837e999c322516964"} Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.355873 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" event={"ID":"a523ff90-92c7-49b5-a532-20d7b7246892","Type":"ContainerStarted","Data":"e13bd3fedff402a1d1b6a625087fa5e737dc3c1ef8609c1ca60d372e31027bd6"} Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.355890 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.357322 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" event={"ID":"21c903f2-40b2-420b-830c-64298a2a77bb","Type":"ContainerStarted","Data":"0c252bd6f3c34c4561bdb35f871ce02964e77d55b1593e57a148e46761652822"} Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.357534 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.358890 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" event={"ID":"ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4","Type":"ContainerStarted","Data":"775863d639360a87fbaf2dd2565576e6da8fcd3d3fe782a0e34d2110683dc879"} Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.359063 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.396553 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" podStartSLOduration=34.396534796 podStartE2EDuration="34.396534796s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:53:02.391592756 +0000 UTC m=+1157.528610011" watchObservedRunningTime="2026-01-26 15:53:02.396534796 +0000 UTC m=+1157.533552031" Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.426096 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" podStartSLOduration=3.079116671 podStartE2EDuration="35.426078033s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.343022259 +0000 UTC 
m=+1124.480039504" lastFinishedPulling="2026-01-26 15:53:01.689983631 +0000 UTC m=+1156.827000866" observedRunningTime="2026-01-26 15:53:02.419224959 +0000 UTC m=+1157.556242194" watchObservedRunningTime="2026-01-26 15:53:02.426078033 +0000 UTC m=+1157.563095268" Jan 26 15:53:02 crc kubenswrapper[4713]: I0126 15:53:02.433801 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" podStartSLOduration=4.122997587 podStartE2EDuration="35.433781302s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.912118029 +0000 UTC m=+1125.049135264" lastFinishedPulling="2026-01-26 15:53:01.222901734 +0000 UTC m=+1156.359918979" observedRunningTime="2026-01-26 15:53:02.433030421 +0000 UTC m=+1157.570047666" watchObservedRunningTime="2026-01-26 15:53:02.433781302 +0000 UTC m=+1157.570798537" Jan 26 15:53:03 crc kubenswrapper[4713]: I0126 15:53:03.301702 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:53:03 crc kubenswrapper[4713]: I0126 15:53:03.302024 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:53:04 crc kubenswrapper[4713]: I0126 15:53:04.182813 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rgk5d" Jan 26 15:53:04 crc kubenswrapper[4713]: I0126 15:53:04.436415 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dm59m" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.066841 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-fzsfn" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.322172 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-rvxn4" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.325929 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pnkxb" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.352058 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8sbhh" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.527107 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7hgnh" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.691544 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lhfk5" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.720162 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2q6vz" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.735063 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7qhb9" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.760595 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbmmw" Jan 26 15:53:08 crc kubenswrapper[4713]: I0126 15:53:08.799806 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-g42hg" Jan 26 15:53:09 crc kubenswrapper[4713]: I0126 15:53:09.045087 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-t6l4x" Jan 26 15:53:10 crc kubenswrapper[4713]: I0126 15:53:10.897523 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-rpgqj" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.483268 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" event={"ID":"83ccbdec-a448-4674-896e-9c634981df65","Type":"ContainerStarted","Data":"5f9626a9d6a814243c8bdb2101f4ec04aaf0e2c5aa26a736c2ff66afd75f97a7"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.484751 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.486520 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" event={"ID":"d4a41bce-dc81-49f2-80a7-06545140458d","Type":"ContainerStarted","Data":"b449b9f3d10ee4f6ab9d6a93e6a64fc3ce89bbd667762eb1667e2987bd2ffaca"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.486938 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.490711 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" event={"ID":"fed44574-f4a7-42df-9179-b2f8a64d180e","Type":"ContainerStarted","Data":"7b67075e3cee95571bb697b2e68f52577504867113f85164f0c80653a4e12e26"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.491235 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.492733 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" event={"ID":"feea11ba-0497-418d-8316-8510b6d807bb","Type":"ContainerStarted","Data":"0be7925bcb1af3e79963af07c2f8cb108be822c78ce7c27004062f5a7b0127a7"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.493074 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.494330 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" event={"ID":"51c3ef5e-a43e-4c76-aab9-ec9d22939005","Type":"ContainerStarted","Data":"aaaacc7e5b6c5bcc616cb52a74d5216552fc633271d675b0e764e457e26b2e59"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.494695 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.496278 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" event={"ID":"6fab0ebc-dfbb-45f5-9802-5cf0145acf7b","Type":"ContainerStarted","Data":"08aebb41486cfdf0f1181c22e87c6e125fbd9df545ac371b1be8c4873771c9e0"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.496696 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.498453 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" event={"ID":"67c02797-1141-4757-aa6e-de1678f8cf47","Type":"ContainerStarted","Data":"30766b94df4c6a0d7825ef1db9e6fd2d819dfc39ef28e353e335b9fad21d531e"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.498621 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.500224 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" event={"ID":"c967ecd2-cf7b-428e-8e86-320c481901fd","Type":"ContainerStarted","Data":"c07bc7b6e4b9315d727a9e7427c7a2661d2a2d122ebfdeac08f07ba4d1cc6c6d"} Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.500396 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.591533 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" podStartSLOduration=3.367499413 podStartE2EDuration="52.591514638s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.796204683 +0000 UTC m=+1124.933221918" lastFinishedPulling="2026-01-26 15:53:19.020219908 +0000 UTC m=+1174.157237143" observedRunningTime="2026-01-26 15:53:19.502700411 +0000 UTC m=+1174.639717646" watchObservedRunningTime="2026-01-26 15:53:19.591514638 +0000 UTC m=+1174.728531873" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.592402 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" podStartSLOduration=3.365765424 podStartE2EDuration="52.592397343s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.795958936 +0000 UTC m=+1124.932976171" lastFinishedPulling="2026-01-26 15:53:19.022590855 +0000 UTC m=+1174.159608090" observedRunningTime="2026-01-26 15:53:19.589836471 +0000 UTC m=+1174.726853736" watchObservedRunningTime="2026-01-26 15:53:19.592397343 +0000 UTC m=+1174.729414578" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.656541 4713 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" podStartSLOduration=3.588273012 podStartE2EDuration="52.656523681s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.951139585 +0000 UTC m=+1125.088156820" lastFinishedPulling="2026-01-26 15:53:19.019390254 +0000 UTC m=+1174.156407489" observedRunningTime="2026-01-26 15:53:19.651860779 +0000 UTC m=+1174.788878014" watchObservedRunningTime="2026-01-26 15:53:19.656523681 +0000 UTC m=+1174.793540916" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.674354 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" podStartSLOduration=2.866445755 podStartE2EDuration="51.674336716s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 15:52:30.212887953 +0000 UTC m=+1125.349905188" lastFinishedPulling="2026-01-26 15:53:19.020778914 +0000 UTC m=+1174.157796149" observedRunningTime="2026-01-26 15:53:19.668572512 +0000 UTC m=+1174.805589747" watchObservedRunningTime="2026-01-26 15:53:19.674336716 +0000 UTC m=+1174.811353951" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.746980 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" podStartSLOduration=3.523167066 podStartE2EDuration="52.746958184s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.795807382 +0000 UTC m=+1124.932824627" lastFinishedPulling="2026-01-26 15:53:19.01959851 +0000 UTC m=+1174.156615745" observedRunningTime="2026-01-26 15:53:19.740539382 +0000 UTC m=+1174.877556617" watchObservedRunningTime="2026-01-26 15:53:19.746958184 +0000 UTC m=+1174.883975449" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.769349 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" podStartSLOduration=2.810196069 podStartE2EDuration="52.769333148s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.042118312 +0000 UTC m=+1124.179135547" lastFinishedPulling="2026-01-26 15:53:19.001255391 +0000 UTC m=+1174.138272626" observedRunningTime="2026-01-26 15:53:19.765203411 +0000 UTC m=+1174.902220646" watchObservedRunningTime="2026-01-26 15:53:19.769333148 +0000 UTC m=+1174.906350383" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.788619 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" podStartSLOduration=3.01453339 podStartE2EDuration="52.788602324s" podCreationTimestamp="2026-01-26 15:52:27 +0000 UTC" firstStartedPulling="2026-01-26 15:52:29.24916951 +0000 UTC m=+1124.386186735" lastFinishedPulling="2026-01-26 15:53:19.023238434 +0000 UTC m=+1174.160255669" observedRunningTime="2026-01-26 15:53:19.783836549 +0000 UTC m=+1174.920853784" watchObservedRunningTime="2026-01-26 15:53:19.788602324 +0000 UTC m=+1174.925619559" Jan 26 15:53:19 crc kubenswrapper[4713]: I0126 15:53:19.927225 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" podStartSLOduration=2.826499623 podStartE2EDuration="51.927201942s" podCreationTimestamp="2026-01-26 15:52:28 +0000 UTC" firstStartedPulling="2026-01-26 
15:52:29.912930812 +0000 UTC m=+1125.049948047" lastFinishedPulling="2026-01-26 15:53:19.013633131 +0000 UTC m=+1174.150650366" observedRunningTime="2026-01-26 15:53:19.913740511 +0000 UTC m=+1175.050757746" watchObservedRunningTime="2026-01-26 15:53:19.927201942 +0000 UTC m=+1175.064219177" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.110152 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cqv2q" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.154187 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gjzk8" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.398456 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6kns8" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.499169 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9h27" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.560882 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.624773 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cndqq" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.670175 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b6h7z" Jan 26 15:53:28 crc kubenswrapper[4713]: I0126 15:53:28.683833 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lnw6c" Jan 26 15:53:33 crc kubenswrapper[4713]: I0126 15:53:33.302118 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:53:33 crc kubenswrapper[4713]: I0126 15:53:33.302861 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.484026 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"] Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.486095 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.488985 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.489220 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.489595 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.489783 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5slhr" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.498751 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"] Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.515333 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"] Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.516854 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.520123 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.550832 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"] Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.579460 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flcvj\" (UniqueName: \"kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.579562 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.681390 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.681463 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.681537 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flcvj\" (UniqueName: \"kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc 
kubenswrapper[4713]: I0126 15:53:45.681642 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.681682 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ssh\" (UniqueName: \"kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.682579 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.700458 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flcvj\" (UniqueName: \"kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj\") pod \"dnsmasq-dns-675f4bcbfc-kng8c\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.782483 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.782535 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9ssh\" (UniqueName: \"kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.782570 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.783404 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.785025 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.793457 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.802094 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9ssh\" (UniqueName: \"kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh\") pod \"dnsmasq-dns-78dd6ddcc-z6sbp\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.806231 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5slhr" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.814583 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" Jan 26 15:53:45 crc kubenswrapper[4713]: I0126 15:53:45.836861 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" Jan 26 15:53:46 crc kubenswrapper[4713]: I0126 15:53:46.171906 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"] Jan 26 15:53:46 crc kubenswrapper[4713]: I0126 15:53:46.411786 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"] Jan 26 15:53:46 crc kubenswrapper[4713]: W0126 15:53:46.418613 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21773aec_9c2f_46ed_8057_27ae422f2536.slice/crio-9a5171c7b7e426066a01026b7dab561dffc6d21b57e8ada491825b920696dbbf WatchSource:0}: Error finding container 9a5171c7b7e426066a01026b7dab561dffc6d21b57e8ada491825b920696dbbf: Status 404 returned error can't find the container with id 9a5171c7b7e426066a01026b7dab561dffc6d21b57e8ada491825b920696dbbf Jan 26 15:53:46 crc kubenswrapper[4713]: I0126 15:53:46.747521 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" event={"ID":"c458bb3d-b382-44a5-8bca-276644fa267b","Type":"ContainerStarted","Data":"131ca8cc9e24dde574ed1d721f8121f6f4f61bea0c9894f081ff9e7bf38f5855"} Jan 26 15:53:46 crc kubenswrapper[4713]: I0126 15:53:46.749251 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" event={"ID":"21773aec-9c2f-46ed-8057-27ae422f2536","Type":"ContainerStarted","Data":"9a5171c7b7e426066a01026b7dab561dffc6d21b57e8ada491825b920696dbbf"} Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.372306 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.413126 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.420117 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.431909 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.461421 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.464654 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fst5m\" (UniqueName: \"kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.464898 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.567931 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.568021 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.568043 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fst5m\" (UniqueName: \"kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.569187 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.569807 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.606100 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fst5m\" (UniqueName: 
\"kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m\") pod \"dnsmasq-dns-666b6646f7-jq69z\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.715806 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.743745 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.747700 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.749253 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.761742 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.771281 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.771374 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hg7p\" (UniqueName: \"kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.771427 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.872732 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.872801 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hg7p\" (UniqueName: \"kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.872831 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.874200 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.874204 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:48 crc kubenswrapper[4713]: I0126 15:53:48.902600 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hg7p\" (UniqueName: \"kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p\") pod \"dnsmasq-dns-57d769cc4f-rc2tb\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.076122 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.336151 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:53:49 crc kubenswrapper[4713]: W0126 15:53:49.338474 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a65ef24_0e05_415a_b7b7_6b44012b6c66.slice/crio-d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb WatchSource:0}: Error finding container d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb: Status 404 returned error can't find the container with id d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.574203 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.575550 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.583356 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.584022 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.584117 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.584174 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.584390 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-65tnt" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.584693 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.585496 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.594187 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.664744 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685679 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685720 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685753 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685779 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685831 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 
15:53:49.685857 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685889 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685915 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.685941 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6jt\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.686030 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.686130 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787440 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787485 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787517 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6jt\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787557 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787590 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787630 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787663 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787687 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787719 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787756 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.787791 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.788835 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.789612 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " 
pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.789975 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.790835 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.790866 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.800564 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.801489 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" event={"ID":"2a65ef24-0e05-415a-b7b7-6b44012b6c66","Type":"ContainerStarted","Data":"d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb"} Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.801634 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.801732 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a0a23a7e437b41ba232f4b8f97a57cdc4bd553de75aff653652d00d1601e57d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.814785 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.815058 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.822527 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.828470 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6jt\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.834989 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" event={"ID":"446e46b1-a8cc-40fc-8947-d49fd0241bdd","Type":"ContainerStarted","Data":"7220dd128d886e9688b5175122995a8880695f0636b7a8d1a0e6512849c77046"} Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.873288 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.911753 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.913357 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.918715 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.919027 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.919227 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.919405 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.920243 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c7g8f" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.920512 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.920712 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.941718 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.951429 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.994795 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnzl\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.994858 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.994888 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.994947 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.994990 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995012 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995040 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995078 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995104 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995192 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:49 crc kubenswrapper[4713]: I0126 15:53:49.995534 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097234 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097311 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097346 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097396 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097425 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097483 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnzl\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097517 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097542 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097600 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097648 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.097673 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.102354 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.103356 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.103556 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.103596 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.105617 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.106537 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.108236 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.112881 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
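The mount sequences above follow a fixed shape per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, with the MountDevice (staging) step skipped because, as the csi_attacher message above says, the hostpath CSI provisioner does not advertise the STAGE_UNSTAGE_VOLUME capability. A small Python sketch for reading such journals offline; mount_timeline and LINE are names invented for this example, the regex is tuned to exactly the journald+klog record shape seen here, and it assumes each record sits on one line as journald emits it:

import re
from collections import defaultdict

# Matches records like:
#   ... I0126 15:53:50.108236 4713 operation_generator.go:637]
#   "MountVolume.SetUp succeeded for volume \"plugins-conf\" ... "
#   pod="openstack/rabbitmq-cell1-server-0"
LINE = re.compile(
    r'(?P<ts>\d{2}:\d{2}:\d{2}\.\d+) \d+ operation_generator\.go:\d+\] '
    r'"MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"')

def mount_timeline(journal_text):
    """Group successful volume mounts by pod, ordered by klog timestamp."""
    per_pod = defaultdict(list)
    for m in LINE.finditer(journal_text):
        per_pod[m.group("pod")].append((m.group("ts"), m.group("vol")))
    return {pod: sorted(events) for pod, events in per_pod.items()}

Fed the records above, this yields, for example, ("15:53:50.108236", "plugins-conf") under "openstack/rabbitmq-cell1-server-0", which makes gaps between a pod's first and last successful mount easy to spot.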
Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.112926 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12a97f487943d57b18987a444e059a363b72befbbd881b9c31da3513a8331d3d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.115920 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.124129 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.134522 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnzl\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.160530 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.262860 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.473861 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.916638 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.920212 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.923528 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.923899 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.924165 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9786v" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.924331 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.932556 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 15:53:50 crc kubenswrapper[4713]: I0126 15:53:50.934622 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.012449 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kolla-config\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.012538 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.012619 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-default\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.012709 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.015840 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jq7t\" (UniqueName: \"kubernetes.io/projected/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kube-api-access-5jq7t\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.015892 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.015946 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.015995 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.117825 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jq7t\" (UniqueName: \"kubernetes.io/projected/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kube-api-access-5jq7t\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.118163 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120188 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120252 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120303 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120409 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kolla-config\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120445 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120478 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-default\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.120523 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.121515 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kolla-config\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.121925 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-default\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.122175 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf79fdc1-80c7-4f65-98e0-b08803c07edc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.125044 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.125091 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45eb9a40ed9f927baf69bf4891c334e72621dd3656ebf1ed82ac9cb636690aee/globalmount\"" pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.127255 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.141019 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jq7t\" (UniqueName: \"kubernetes.io/projected/cf79fdc1-80c7-4f65-98e0-b08803c07edc-kube-api-access-5jq7t\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.147846 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf79fdc1-80c7-4f65-98e0-b08803c07edc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.195349 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97f11315-20cb-468a-97e2-2cbf5f3793bd\") pod \"openstack-galera-0\" (UID: \"cf79fdc1-80c7-4f65-98e0-b08803c07edc\") " pod="openstack/openstack-galera-0" Jan 26 15:53:51 crc kubenswrapper[4713]: I0126 15:53:51.250708 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.312835 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.315013 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.318546 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.326119 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.336073 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.336164 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q88pg" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.336978 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451644 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451720 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451773 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451863 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451919 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451943 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs299\" (UniqueName: \"kubernetes.io/projected/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kube-api-access-vs299\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.451990 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.452070 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.553491 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.553853 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.553892 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.553973 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.554023 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.554048 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs299\" (UniqueName: \"kubernetes.io/projected/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kube-api-access-vs299\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.554080 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.554162 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.555105 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.555777 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.553972 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.556546 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.558505 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.558542 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81f4e70be4e0396193db00104f30f3fc57d77413b150d783cf1ef4185b3fbc5f/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.561676 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.581893 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.611056 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs299\" (UniqueName: \"kubernetes.io/projected/5bba60c2-25f6-41a7-a231-51fc5a6a9d3b-kube-api-access-vs299\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.611151 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.612361 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.618119 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.618403 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.619214 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.620049 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-58txz" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.646903 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-031ce4b9-ed23-4e03-a7ff-4093d867fbeb\") pod \"openstack-cell1-galera-0\" (UID: \"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.657129 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-config-data\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.657238 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.657550 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc6wb\" (UniqueName: \"kubernetes.io/projected/6637e535-e95f-407f-a97d-11da8ad9629c-kube-api-access-cc6wb\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.657611 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-kolla-config\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.657633 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.674704 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.762981 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc6wb\" (UniqueName: \"kubernetes.io/projected/6637e535-e95f-407f-a97d-11da8ad9629c-kube-api-access-cc6wb\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.763080 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-kolla-config\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.763098 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.763169 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-config-data\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.763258 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.765631 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-kolla-config\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.768728 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6637e535-e95f-407f-a97d-11da8ad9629c-config-data\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.774610 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.786248 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc6wb\" (UniqueName: \"kubernetes.io/projected/6637e535-e95f-407f-a97d-11da8ad9629c-kube-api-access-cc6wb\") pod \"memcached-0\" (UID: \"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.786906 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6637e535-e95f-407f-a97d-11da8ad9629c-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"6637e535-e95f-407f-a97d-11da8ad9629c\") " pod="openstack/memcached-0" Jan 26 15:53:52 crc kubenswrapper[4713]: I0126 15:53:52.977539 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.224706 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.226755 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.231047 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rkpsp" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.252454 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.300243 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4pj\" (UniqueName: \"kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj\") pod \"kube-state-metrics-0\" (UID: \"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4\") " pod="openstack/kube-state-metrics-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.402158 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx4pj\" (UniqueName: \"kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj\") pod \"kube-state-metrics-0\" (UID: \"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4\") " pod="openstack/kube-state-metrics-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.434801 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx4pj\" (UniqueName: \"kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj\") pod \"kube-state-metrics-0\" (UID: \"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4\") " pod="openstack/kube-state-metrics-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.584999 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.964286 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.965953 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.967830 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.968176 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.968210 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.969310 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.970273 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-w6wxh" Jan 26 15:53:54 crc kubenswrapper[4713]: I0126 15:53:54.992685 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.017826 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.017990 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.018044 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.018164 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfq2d\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-kube-api-access-sfq2d\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.018199 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.018221 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-out\") pod 
\"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.018439 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.119842 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfq2d\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-kube-api-access-sfq2d\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.119908 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.119931 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.119953 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.119986 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.120038 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.120068 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.122229 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.124960 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.126715 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.128487 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.128937 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.129343 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.148055 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfq2d\" (UniqueName: \"kubernetes.io/projected/a25c5d9b-6658-4b9a-8fe7-fb4b3714696e-kube-api-access-sfq2d\") pod \"alertmanager-metric-storage-0\" (UID: \"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.297894 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.363467 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.365349 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.367751 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.368039 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.368222 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hsnpc" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.368401 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.368563 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.369716 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.371182 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.373406 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.393309 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.423887 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7lz\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.423975 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424040 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424067 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424219 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424284 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424408 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424467 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424495 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.424626 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526297 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526356 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k7lz\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526423 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526470 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526492 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526518 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526542 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526565 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526588 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.526609 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.527665 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.527908 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.527959 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.531316 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.532228 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.541879 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.541955 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3a47662dc62deaa080ed91fdc8d2453be14d746889aa742379c7becfb263ca9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.544192 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.545057 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.545339 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k7lz\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.546826 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.573232 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:55 crc kubenswrapper[4713]: I0126 15:53:55.690548 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.562837 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9tvd"] Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.564299 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.571892 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7fsk8" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.572285 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.581642 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.584563 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rl7z9"] Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.586315 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.593234 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9tvd"] Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.649570 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rl7z9"] Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.663920 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.663976 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664007 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-run\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664044 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-ovn-controller-tls-certs\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664110 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-log\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664194 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/518d38d7-b30e-4d67-a3d7-456e26fc9869-scripts\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664263 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-combined-ca-bundle\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664612 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-lib\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664670 
4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d161dabd-5253-4929-998e-07f3d465a03d-scripts\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664760 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjhsv\" (UniqueName: \"kubernetes.io/projected/518d38d7-b30e-4d67-a3d7-456e26fc9869-kube-api-access-fjhsv\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664862 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7nz\" (UniqueName: \"kubernetes.io/projected/d161dabd-5253-4929-998e-07f3d465a03d-kube-api-access-ls7nz\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664925 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-log-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.664958 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-etc-ovs\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.766808 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-lib\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.766894 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d161dabd-5253-4929-998e-07f3d465a03d-scripts\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.766975 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjhsv\" (UniqueName: \"kubernetes.io/projected/518d38d7-b30e-4d67-a3d7-456e26fc9869-kube-api-access-fjhsv\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767042 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7nz\" (UniqueName: \"kubernetes.io/projected/d161dabd-5253-4929-998e-07f3d465a03d-kube-api-access-ls7nz\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767101 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-log-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767131 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-etc-ovs\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767205 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767231 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767261 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-run\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767289 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-ovn-controller-tls-certs\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-log\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767347 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/518d38d7-b30e-4d67-a3d7-456e26fc9869-scripts\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.767404 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-combined-ca-bundle\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769085 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-run\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " 
pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769136 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-lib\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769157 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-etc-ovs\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769226 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769247 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d161dabd-5253-4929-998e-07f3d465a03d-var-log\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769452 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-log-ovn\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.769479 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/518d38d7-b30e-4d67-a3d7-456e26fc9869-var-run\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.770989 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d161dabd-5253-4929-998e-07f3d465a03d-scripts\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.771364 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/518d38d7-b30e-4d67-a3d7-456e26fc9869-scripts\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.772973 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-combined-ca-bundle\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.786565 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/518d38d7-b30e-4d67-a3d7-456e26fc9869-ovn-controller-tls-certs\") pod 
\"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.787331 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls7nz\" (UniqueName: \"kubernetes.io/projected/d161dabd-5253-4929-998e-07f3d465a03d-kube-api-access-ls7nz\") pod \"ovn-controller-ovs-rl7z9\" (UID: \"d161dabd-5253-4929-998e-07f3d465a03d\") " pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.787589 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjhsv\" (UniqueName: \"kubernetes.io/projected/518d38d7-b30e-4d67-a3d7-456e26fc9869-kube-api-access-fjhsv\") pod \"ovn-controller-c9tvd\" (UID: \"518d38d7-b30e-4d67-a3d7-456e26fc9869\") " pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.888989 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9tvd" Jan 26 15:53:57 crc kubenswrapper[4713]: I0126 15:53:57.950192 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:53:58 crc kubenswrapper[4713]: I0126 15:53:58.917748 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerStarted","Data":"382b983ae6a117f226b3d61bab487bc29048380342cba284172f6aa1fbcd11a0"} Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.185674 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.187996 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.191438 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.191738 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-svqf4" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.191931 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.192126 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.192293 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.206261 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292057 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292124 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292210 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krgd\" (UniqueName: \"kubernetes.io/projected/4567e561-0bd8-4368-8868-e2531d7bb8d3-kube-api-access-7krgd\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292238 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292269 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292308 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292487 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.292520 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393324 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393421 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393467 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393502 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393561 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krgd\" (UniqueName: \"kubernetes.io/projected/4567e561-0bd8-4368-8868-e2531d7bb8d3-kube-api-access-7krgd\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393591 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393622 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.393656 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.400552 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.404522 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.405563 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.406182 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4567e561-0bd8-4368-8868-e2531d7bb8d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.413909 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krgd\" (UniqueName: \"kubernetes.io/projected/4567e561-0bd8-4368-8868-e2531d7bb8d3-kube-api-access-7krgd\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.415114 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.416518 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4567e561-0bd8-4368-8868-e2531d7bb8d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.428463 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.428515 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/95b7c6b1481343186228cbfbfae153c46127693df3a0e3042c6899d30db1370c/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.470898 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a9b5e73-9754-4406-8a29-e58f04ae5fdc\") pod \"ovsdbserver-nb-0\" (UID: \"4567e561-0bd8-4368-8868-e2531d7bb8d3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:53:59 crc kubenswrapper[4713]: I0126 15:53:59.523278 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 15:54:02 crc kubenswrapper[4713]: I0126 15:54:02.485499 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.207137 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.208459 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.213029 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.213085 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-drk8s" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.213085 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.213187 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.214174 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.226377 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.272454 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.272515 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vctp\" (UniqueName: 
\"kubernetes.io/projected/a1acb746-e41c-4b08-aefb-1277d7e710c9-kube-api-access-5vctp\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.272631 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.272678 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.272761 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.303032 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.308938 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.310421 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.310514 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.310602 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.313180 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-57xl6" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.313203 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.316389 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.316498 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62" gracePeriod=600 Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.317073 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.317513 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.365402 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.382815 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.382907 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.382938 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.382960 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.382982 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383011 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383028 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-924ef930-2cef-4237-b983-acc43b81cacb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-924ef930-2cef-4237-b983-acc43b81cacb\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383045 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383060 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55rw\" (UniqueName: \"kubernetes.io/projected/4a0b03b5-597a-4c59-9784-218e9f9442d1-kube-api-access-h55rw\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383084 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383100 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383131 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.383150 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vctp\" (UniqueName: \"kubernetes.io/projected/a1acb746-e41c-4b08-aefb-1277d7e710c9-kube-api-access-5vctp\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.385134 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.393446 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1acb746-e41c-4b08-aefb-1277d7e710c9-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.396392 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.398494 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a1acb746-e41c-4b08-aefb-1277d7e710c9-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.421627 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.424350 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.430006 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.430189 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.430293 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.431275 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vctp\" (UniqueName: \"kubernetes.io/projected/a1acb746-e41c-4b08-aefb-1277d7e710c9-kube-api-access-5vctp\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-qngml\" (UID: \"a1acb746-e41c-4b08-aefb-1277d7e710c9\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.448316 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486207 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486278 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486344 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486406 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486441 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486467 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486493 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486515 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfqs\" (UniqueName: \"kubernetes.io/projected/f47dae24-9ea7-4625-a367-43fd29037227-kube-api-access-hbfqs\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.486544 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.487027 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.487170 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-924ef930-2cef-4237-b983-acc43b81cacb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-924ef930-2cef-4237-b983-acc43b81cacb\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.487302 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.487707 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55rw\" (UniqueName: \"kubernetes.io/projected/4a0b03b5-597a-4c59-9784-218e9f9442d1-kube-api-access-h55rw\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.487825 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.495990 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.496254 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.501871 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a0b03b5-597a-4c59-9784-218e9f9442d1-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.510334 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.511824 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.523379 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.524658 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.527498 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.527538 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-924ef930-2cef-4237-b983-acc43b81cacb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-924ef930-2cef-4237-b983-acc43b81cacb\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb988aa81999c0f84dd7110ad850d99570b81d507af375da3b1a59da781eeb6a/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.531492 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.531706 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.533467 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0b03b5-597a-4c59-9784-218e9f9442d1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.535617 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55rw\" (UniqueName: \"kubernetes.io/projected/4a0b03b5-597a-4c59-9784-218e9f9442d1-kube-api-access-h55rw\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.538268 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.543646 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592102 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592195 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592251 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592282 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfqs\" (UniqueName: \"kubernetes.io/projected/f47dae24-9ea7-4625-a367-43fd29037227-kube-api-access-hbfqs\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592315 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.592352 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.594021 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.594913 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f47dae24-9ea7-4625-a367-43fd29037227-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.597325 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.600898 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.600966 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f47dae24-9ea7-4625-a367-43fd29037227-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.616537 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-924ef930-2cef-4237-b983-acc43b81cacb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-924ef930-2cef-4237-b983-acc43b81cacb\") pod \"ovsdbserver-sb-0\" (UID: \"4a0b03b5-597a-4c59-9784-218e9f9442d1\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.624659 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfqs\" (UniqueName: \"kubernetes.io/projected/f47dae24-9ea7-4625-a367-43fd29037227-kube-api-access-hbfqs\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-hvlk9\" (UID: \"f47dae24-9ea7-4625-a367-43fd29037227\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.661131 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.693122 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.694175 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.695802 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.695840 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.695922 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.696301 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.696331 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtkn\" (UniqueName: \"kubernetes.io/projected/deeee241-0904-4385-b17a-b390dfc5b2d4-kube-api-access-6rtkn\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731099 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731236 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731285 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731413 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731101 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731553 4713 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"cloudkitty-lokistack-gateway" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.731623 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-wqksh" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.778471 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.784781 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.792123 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.793616 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798029 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798133 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798203 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjkmz\" (UniqueName: \"kubernetes.io/projected/912dd8bd-b0f7-441d-82fe-547964030ae5-kube-api-access-xjkmz\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798275 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798326 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798395 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtkn\" (UniqueName: 
\"kubernetes.io/projected/deeee241-0904-4385-b17a-b390dfc5b2d4-kube-api-access-6rtkn\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.798507 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802536 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802692 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802733 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802765 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802782 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802806 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802827 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802844 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802893 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802965 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.802988 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803009 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803038 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803072 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803131 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zqp\" (UniqueName: \"kubernetes.io/projected/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-kube-api-access-67zqp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803163 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.803594 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.804292 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.809502 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.812956 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/deeee241-0904-4385-b17a-b390dfc5b2d4-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.836220 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb"] Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.841061 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtkn\" (UniqueName: \"kubernetes.io/projected/deeee241-0904-4385-b17a-b390dfc5b2d4-kube-api-access-6rtkn\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-xdr68\" (UID: \"deeee241-0904-4385-b17a-b390dfc5b2d4\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.911821 4713 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913170 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913210 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913321 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913344 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913451 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913483 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913508 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913563 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913609 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913642 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913667 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913721 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913763 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913805 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zqp\" (UniqueName: \"kubernetes.io/projected/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-kube-api-access-67zqp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913844 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913873 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc 
kubenswrapper[4713]: I0126 15:54:03.913916 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.913940 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjkmz\" (UniqueName: \"kubernetes.io/projected/912dd8bd-b0f7-441d-82fe-547964030ae5-kube-api-access-xjkmz\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.914039 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.916723 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.917108 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.917967 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.918104 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.919852 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.919905 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.921198 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.921880 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.922442 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.923562 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.928746 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.928891 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.930288 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/912dd8bd-b0f7-441d-82fe-547964030ae5-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.937269 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-cloudkitty-lokistack-gateway-client-http\") pod 
\"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.940865 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zqp\" (UniqueName: \"kubernetes.io/projected/d4f06dea-6c6e-4c23-a3e0-c10144d7338c-kube-api-access-67zqp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx\" (UID: \"d4f06dea-6c6e-4c23-a3e0-c10144d7338c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.940907 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd8bd-b0f7-441d-82fe-547964030ae5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.941797 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjkmz\" (UniqueName: \"kubernetes.io/projected/912dd8bd-b0f7-441d-82fe-547964030ae5-kube-api-access-xjkmz\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-rstdb\" (UID: \"912dd8bd-b0f7-441d-82fe-547964030ae5\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.974644 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62" exitCode=0 Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.974687 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62"} Jan 26 15:54:03 crc kubenswrapper[4713]: I0126 15:54:03.974718 4713 scope.go:117] "RemoveContainer" containerID="f3174ffab26223a39cf8575650c8eb910e6234e36fda4aca35e1d463b1d024ff" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.062207 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.113532 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.398981 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.401329 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.405416 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.406893 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.413960 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.425407 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.475135 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.491977 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.494867 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.518578 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.519174 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.529814 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530115 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7nc\" (UniqueName: \"kubernetes.io/projected/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-kube-api-access-rz7nc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530207 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530310 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530460 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530607 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530722 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530807 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.530908 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.610319 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.634583 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.635461 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz7nc\" (UniqueName: \"kubernetes.io/projected/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-kube-api-access-rz7nc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.635626 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn9dd\" (UniqueName: \"kubernetes.io/projected/9144f526-8060-4b3b-bf78-26babcd1d963-kube-api-access-kn9dd\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.635742 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.635871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636092 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636210 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636328 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636485 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636513 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.636735 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 
crc kubenswrapper[4713]: I0126 15:54:04.637662 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.637861 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.638010 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.638200 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.645764 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.662057 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.662977 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.664054 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.664603 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: 
\"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.669502 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.671663 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.675317 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz7nc\" (UniqueName: \"kubernetes.io/projected/a45d2a2d-be1b-476e-8fbf-f9bdd5a97301-kube-api-access-rz7nc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.675637 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.675870 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.687912 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.746611 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767515 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767579 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767613 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn9dd\" (UniqueName: \"kubernetes.io/projected/9144f526-8060-4b3b-bf78-26babcd1d963-kube-api-access-kn9dd\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767639 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:54:04 crc 
kubenswrapper[4713]: I0126 15:54:04.767704 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767741 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767763 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767786 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767807 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767927 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767956 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.767989 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.768036 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.768068 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pshv\" (UniqueName: \"kubernetes.io/projected/ba185c6c-eecc-45d1-adef-b3bd7fa84686-kube-api-access-6pshv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.768711 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.769224 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.770319 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144f526-8060-4b3b-bf78-26babcd1d963-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.774042 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.774648 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.795173 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.807324 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn9dd\" (UniqueName: \"kubernetes.io/projected/9144f526-8060-4b3b-bf78-26babcd1d963-kube-api-access-kn9dd\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.814165 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/9144f526-8060-4b3b-bf78-26babcd1d963-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.829228 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"9144f526-8060-4b3b-bf78-26babcd1d963\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.869884 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.869944 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pshv\" (UniqueName: \"kubernetes.io/projected/ba185c6c-eecc-45d1-adef-b3bd7fa84686-kube-api-access-6pshv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.869971 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.869997 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.870043 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.870067 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.870085 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.870994 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.871109 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.878494 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.882301 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.887205 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.887274 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ba185c6c-eecc-45d1-adef-b3bd7fa84686-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.897115 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pshv\" (UniqueName: \"kubernetes.io/projected/ba185c6c-eecc-45d1-adef-b3bd7fa84686-kube-api-access-6pshv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:04 crc kubenswrapper[4713]: I0126 15:54:04.906690 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ba185c6c-eecc-45d1-adef-b3bd7fa84686\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:05 crc kubenswrapper[4713]: I0126 15:54:05.054105 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 26 15:54:05 crc kubenswrapper[4713]: I0126 15:54:05.126464 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:08 crc kubenswrapper[4713]: W0126 15:54:08.781276 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78543593_d6da_448f_adf7_e1ead58bfb5f.slice/crio-f52cf428a199a308bf8defe41efe92502fe7063994479305f414d0205ae5c0d9 WatchSource:0}: Error finding container f52cf428a199a308bf8defe41efe92502fe7063994479305f414d0205ae5c0d9: Status 404 returned error can't find the container with id f52cf428a199a308bf8defe41efe92502fe7063994479305f414d0205ae5c0d9
Jan 26 15:54:09 crc kubenswrapper[4713]: I0126 15:54:09.013923 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerStarted","Data":"f52cf428a199a308bf8defe41efe92502fe7063994479305f414d0205ae5c0d9"}
Jan 26 15:54:09 crc kubenswrapper[4713]: I0126 15:54:09.210048 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.536679 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.537257 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9ssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-z6sbp_openstack(21773aec-9c2f-46ed-8057-27ae422f2536): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.538778 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" podUID="21773aec-9c2f-46ed-8057-27ae422f2536"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.629557 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.629676 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.629694 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fst5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jq69z_openstack(2a65ef24-0e05-415a-b7b7-6b44012b6c66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.629779 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hg7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rc2tb_openstack(446e46b1-a8cc-40fc-8947-d49fd0241bdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.629832 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.630014 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kng8c_openstack(c458bb3d-b382-44a5-8bca-276644fa267b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.631352 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.631773 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" podUID="c458bb3d-b382-44a5-8bca-276644fa267b"
Jan 26 15:54:12 crc kubenswrapper[4713]: E0126 15:54:12.631797 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd"
Jan 26 15:54:13 crc kubenswrapper[4713]: I0126 15:54:13.154846 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e","Type":"ContainerStarted","Data":"3dcd77aea3328e9a07665902fe0ac5d243e7719548e1cbe5b88603e808168edb"}
Jan 26 15:54:13 crc kubenswrapper[4713]: E0126 15:54:13.168686 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66"
Jan 26 15:54:13 crc kubenswrapper[4713]: E0126 15:54:13.181718 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd"
Jan 26 15:54:14 crc kubenswrapper[4713]: I0126 15:54:14.163740 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2"}
Jan 26 15:54:14 crc kubenswrapper[4713]: I0126 15:54:14.525226 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 26 15:54:14 crc kubenswrapper[4713]: I0126 15:54:14.542522 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 26 15:54:14 crc kubenswrapper[4713]: I0126 15:54:14.614634 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.052616 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.079485 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.124910 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.177407 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9tvd"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.180030 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerStarted","Data":"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512"}
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.190475 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.197577 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.203594 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.558153 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.582755 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9144f526_8060_4b3b_bf78_26babcd1d963.slice/crio-627f5f2d397efd367e847ce52c60ae3a3b5e87ac27ddb0e9dc40ebd653204421 WatchSource:0}: Error finding container 627f5f2d397efd367e847ce52c60ae3a3b5e87ac27ddb0e9dc40ebd653204421: Status 404 returned error can't find the container with id 627f5f2d397efd367e847ce52c60ae3a3b5e87ac27ddb0e9dc40ebd653204421
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.620791 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6637e535_e95f_407f_a97d_11da8ad9629c.slice/crio-61a0be71b485f03505d35b7b8969a02f90dd655d93a3c56efd8db4c8073e061a WatchSource:0}: Error finding container 61a0be71b485f03505d35b7b8969a02f90dd655d93a3c56efd8db4c8073e061a: Status 404 returned error can't find the container with id 61a0be71b485f03505d35b7b8969a02f90dd655d93a3c56efd8db4c8073e061a
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.624930 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4567e561_0bd8_4368_8868_e2531d7bb8d3.slice/crio-02759ffa4d09e1e3ff9db9aac19b7a15a431e468cee675606f29c87454ecfec5 WatchSource:0}: Error finding container 02759ffa4d09e1e3ff9db9aac19b7a15a431e468cee675606f29c87454ecfec5: Status 404 returned error can't find the container with id 02759ffa4d09e1e3ff9db9aac19b7a15a431e468cee675606f29c87454ecfec5
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.628247 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod518d38d7_b30e_4d67_a3d7_456e26fc9869.slice/crio-28a03df8903c45c0bdf2b14cc37850ea7275db48b886a67a026494b461ee7c59 WatchSource:0}: Error finding container 28a03df8903c45c0bdf2b14cc37850ea7275db48b886a67a026494b461ee7c59: Status 404 returned error can't find the container with id 28a03df8903c45c0bdf2b14cc37850ea7275db48b886a67a026494b461ee7c59
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.699558 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c"
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.705681 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp"
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.754801 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb"]
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.756015 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod912dd8bd_b0f7_441d_82fe_547964030ae5.slice/crio-bb75e34905a66e72ca398568ac526de39acdaf219417503d53152e4ac8d1ccc3 WatchSource:0}: Error finding container bb75e34905a66e72ca398568ac526de39acdaf219417503d53152e4ac8d1ccc3: Status 404 returned error can't find the container with id bb75e34905a66e72ca398568ac526de39acdaf219417503d53152e4ac8d1ccc3
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.759290 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml"]
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.760701 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjkmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7db4f4db8c-rstdb_openstack(912dd8bd-b0f7-441d-82fe-547964030ae5): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 15:54:15 crc kubenswrapper[4713]: W0126 15:54:15.764737 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1acb746_e41c_4b08_aefb_1277d7e710c9.slice/crio-02d5faf208b54056f7a7ca093b4b82e5984b282b9e4047d565dde3a06a692bce WatchSource:0}: Error finding container 02d5faf208b54056f7a7ca093b4b82e5984b282b9e4047d565dde3a06a692bce: Status 404 returned error can't find the container with id 02d5faf208b54056f7a7ca093b4b82e5984b282b9e4047d565dde3a06a692bce
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.764800 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" podUID="912dd8bd-b0f7-441d-82fe-547964030ae5"
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.768329 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.776357 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.798330 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pshv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(ba185c6c-eecc-45d1-adef-b3bd7fa84686): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.800741 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="ba185c6c-eecc-45d1-adef-b3bd7fa84686"
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.823293 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz7nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(a45d2a2d-be1b-476e-8fbf-f9bdd5a97301): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 15:54:15 crc kubenswrapper[4713]: E0126 15:54:15.824847 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301"
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.991053 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config\") pod \"21773aec-9c2f-46ed-8057-27ae422f2536\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") "
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.991287 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc\") pod \"21773aec-9c2f-46ed-8057-27ae422f2536\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") "
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.991343 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config\") pod \"c458bb3d-b382-44a5-8bca-276644fa267b\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") "
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.991423 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flcvj\" (UniqueName: \"kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj\") pod \"c458bb3d-b382-44a5-8bca-276644fa267b\" (UID: \"c458bb3d-b382-44a5-8bca-276644fa267b\") "
Jan 26 15:54:15 crc kubenswrapper[4713]: I0126 15:54:15.991477 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9ssh\" (UniqueName: \"kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh\") pod \"21773aec-9c2f-46ed-8057-27ae422f2536\" (UID: \"21773aec-9c2f-46ed-8057-27ae422f2536\") "
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:15.995259 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config" (OuterVolumeSpecName: "config") pod "21773aec-9c2f-46ed-8057-27ae422f2536" (UID: "21773aec-9c2f-46ed-8057-27ae422f2536"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:15.995717 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config" (OuterVolumeSpecName: "config") pod "c458bb3d-b382-44a5-8bca-276644fa267b" (UID: "c458bb3d-b382-44a5-8bca-276644fa267b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:15.996121 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21773aec-9c2f-46ed-8057-27ae422f2536" (UID: "21773aec-9c2f-46ed-8057-27ae422f2536"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:15.999115 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh" (OuterVolumeSpecName: "kube-api-access-t9ssh") pod "21773aec-9c2f-46ed-8057-27ae422f2536" (UID: "21773aec-9c2f-46ed-8057-27ae422f2536"). InnerVolumeSpecName "kube-api-access-t9ssh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.000648 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj" (OuterVolumeSpecName: "kube-api-access-flcvj") pod "c458bb3d-b382-44a5-8bca-276644fa267b" (UID: "c458bb3d-b382-44a5-8bca-276644fa267b"). InnerVolumeSpecName "kube-api-access-flcvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.035142 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.093141 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.093172 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21773aec-9c2f-46ed-8057-27ae422f2536-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.093181 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c458bb3d-b382-44a5-8bca-276644fa267b-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.093190 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flcvj\" (UniqueName: \"kubernetes.io/projected/c458bb3d-b382-44a5-8bca-276644fa267b-kube-api-access-flcvj\") on node \"crc\" DevicePath \"\""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.093199 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9ssh\" (UniqueName: \"kubernetes.io/projected/21773aec-9c2f-46ed-8057-27ae422f2536-kube-api-access-t9ssh\") on node \"crc\" DevicePath \"\""
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.195922 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerStarted","Data":"5f06368974061b373afa009965d61709f6d87a65f871e63d054e25a60e82240d"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.199271 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"9144f526-8060-4b3b-bf78-26babcd1d963","Type":"ContainerStarted","Data":"627f5f2d397efd367e847ce52c60ae3a3b5e87ac27ddb0e9dc40ebd653204421"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.200485 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ba185c6c-eecc-45d1-adef-b3bd7fa84686","Type":"ContainerStarted","Data":"496e5f527e4909f566d27d625ef9b29de7474521fdcc8d6eb2266bcd3d3bc49e"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.201679 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a0b03b5-597a-4c59-9784-218e9f9442d1","Type":"ContainerStarted","Data":"b671d6781d07bcd115b41db6a649bf4b99e8d47136b8b6d0cbec800ce0d3067f"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.203180 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4567e561-0bd8-4368-8868-e2531d7bb8d3","Type":"ContainerStarted","Data":"02759ffa4d09e1e3ff9db9aac19b7a15a431e468cee675606f29c87454ecfec5"}
Jan 26 15:54:16 crc kubenswrapper[4713]: E0126 15:54:16.203791 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="ba185c6c-eecc-45d1-adef-b3bd7fa84686"
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.204353 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4","Type":"ContainerStarted","Data":"a8789b756003864cdaee3061f6811d27a4becad5513a9266ac4068517c34f243"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.206433 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" event={"ID":"deeee241-0904-4385-b17a-b390dfc5b2d4","Type":"ContainerStarted","Data":"8255691812e131c104afb82e642fcc4755708017f56641f3adde48648413e22f"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.207436 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301","Type":"ContainerStarted","Data":"a8ea0ecc60aa5b1f36bebc3f4136638969d2264d8551026613385010841a7d1c"}
Jan 26 15:54:16 crc kubenswrapper[4713]: E0126 15:54:16.208783 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301"
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.210622 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9tvd" event={"ID":"518d38d7-b30e-4d67-a3d7-456e26fc9869","Type":"ContainerStarted","Data":"28a03df8903c45c0bdf2b14cc37850ea7275db48b886a67a026494b461ee7c59"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.216454 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cf79fdc1-80c7-4f65-98e0-b08803c07edc","Type":"ContainerStarted","Data":"97885dcc07a9e5dafb623d1492d0d1f7957b3408d1348cee6b6426830e9da21f"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.217533 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" event={"ID":"912dd8bd-b0f7-441d-82fe-547964030ae5","Type":"ContainerStarted","Data":"bb75e34905a66e72ca398568ac526de39acdaf219417503d53152e4ac8d1ccc3"}
Jan 26 15:54:16 crc kubenswrapper[4713]: E0126 15:54:16.226469 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" podUID="912dd8bd-b0f7-441d-82fe-547964030ae5"
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.236412 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" event={"ID":"a1acb746-e41c-4b08-aefb-1277d7e710c9","Type":"ContainerStarted","Data":"02d5faf208b54056f7a7ca093b4b82e5984b282b9e4047d565dde3a06a692bce"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.237920 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c" event={"ID":"c458bb3d-b382-44a5-8bca-276644fa267b","Type":"ContainerDied","Data":"131ca8cc9e24dde574ed1d721f8121f6f4f61bea0c9894f081ff9e7bf38f5855"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.237997 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kng8c"
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.239525 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" event={"ID":"f47dae24-9ea7-4625-a367-43fd29037227","Type":"ContainerStarted","Data":"649687ba6ca0a58fc9a72a77bf66c018ec9fb9e84180cf7304114094a541627a"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.252217 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" event={"ID":"d4f06dea-6c6e-4c23-a3e0-c10144d7338c","Type":"ContainerStarted","Data":"fd0e3dfc8cbfa9f7c5f5a7b0dcc7e5faf3c91de2c74d9b07612fd2ed971811e7"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.253998 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6637e535-e95f-407f-a97d-11da8ad9629c","Type":"ContainerStarted","Data":"61a0be71b485f03505d35b7b8969a02f90dd655d93a3c56efd8db4c8073e061a"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.254803 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp" event={"ID":"21773aec-9c2f-46ed-8057-27ae422f2536","Type":"ContainerDied","Data":"9a5171c7b7e426066a01026b7dab561dffc6d21b57e8ada491825b920696dbbf"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.254862 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z6sbp"
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.256827 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b","Type":"ContainerStarted","Data":"b7b785bf2b4a17e6fd2c000cf7b3646b755359108c27bfb1d28e92a3c864ecf6"}
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.469285 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rl7z9"]
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.529811 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"]
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.545879 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kng8c"]
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.604133 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"]
Jan 26 15:54:16 crc kubenswrapper[4713]: I0126 15:54:16.611439 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z6sbp"]
Jan 26 15:54:16 crc kubenswrapper[4713]: W0126 15:54:16.685230 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd161dabd_5253_4929_998e_07f3d465a03d.slice/crio-284b515fe1f8d2dd5fe323f97131d2c035e12d9cd3f75fbc4862618799c12866 WatchSource:0}: Error finding container 284b515fe1f8d2dd5fe323f97131d2c035e12d9cd3f75fbc4862618799c12866: Status 404 returned error can't find the container with id 284b515fe1f8d2dd5fe323f97131d2c035e12d9cd3f75fbc4862618799c12866
Jan 26 15:54:17 crc kubenswrapper[4713]: I0126 15:54:17.267229 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rl7z9" event={"ID":"d161dabd-5253-4929-998e-07f3d465a03d","Type":"ContainerStarted","Data":"284b515fe1f8d2dd5fe323f97131d2c035e12d9cd3f75fbc4862618799c12866"}
Jan 26 15:54:17 crc kubenswrapper[4713]: E0126 15:54:17.269877 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301"
Jan 26 15:54:17 crc kubenswrapper[4713]: E0126 15:54:17.270035 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" podUID="912dd8bd-b0f7-441d-82fe-547964030ae5"
Jan 26 15:54:17 crc kubenswrapper[4713]: E0126 15:54:17.274019 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="ba185c6c-eecc-45d1-adef-b3bd7fa84686"
Jan 26 15:54:17 crc kubenswrapper[4713]: I0126 15:54:17.835803 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21773aec-9c2f-46ed-8057-27ae422f2536" path="/var/lib/kubelet/pods/21773aec-9c2f-46ed-8057-27ae422f2536/volumes"
Jan 26 15:54:17 crc kubenswrapper[4713]: I0126 15:54:17.836271 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c458bb3d-b382-44a5-8bca-276644fa267b" path="/var/lib/kubelet/pods/c458bb3d-b382-44a5-8bca-276644fa267b/volumes"
Jan 26 15:54:18 crc kubenswrapper[4713]: I0126 15:54:18.282970 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerStarted","Data":"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f"}
Jan 26 15:54:19 crc kubenswrapper[4713]: I0126 15:54:19.296428 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerStarted","Data":"dc5a619459a84dbf47717cb24a2a9866189e214b6d1072f1143f1f9d5871eb73"}
Jan 26 15:54:19 crc kubenswrapper[4713]: I0126 15:54:19.299675 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e","Type":"ContainerStarted","Data":"33d44d1f2334b1f26c2db6d68b4d8d5fb78057a86292e75aa59c0ea345cb4ffc"}
Jan 26 15:54:26 crc kubenswrapper[4713]: I0126 15:54:26.370582 4713 generic.go:334] "Generic (PLEG): container finished" podID="a25c5d9b-6658-4b9a-8fe7-fb4b3714696e" containerID="33d44d1f2334b1f26c2db6d68b4d8d5fb78057a86292e75aa59c0ea345cb4ffc" exitCode=0
Jan 26 15:54:26 crc kubenswrapper[4713]: I0126 15:54:26.370899 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e","Type":"ContainerDied","Data":"33d44d1f2334b1f26c2db6d68b4d8d5fb78057a86292e75aa59c0ea345cb4ffc"}
Jan 26 15:54:27 crc kubenswrapper[4713]: I0126 15:54:27.381384 4713 generic.go:334] "Generic (PLEG): container finished" podID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerID="dc5a619459a84dbf47717cb24a2a9866189e214b6d1072f1143f1f9d5871eb73" exitCode=0
Jan 26 15:54:27 crc kubenswrapper[4713]: I0126 15:54:27.381475 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerDied","Data":"dc5a619459a84dbf47717cb24a2a9866189e214b6d1072f1143f1f9d5871eb73"}
Jan 26 15:54:32 crc kubenswrapper[4713]: E0126 15:54:32.155944 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified"
Jan 26 15:54:32 crc kubenswrapper[4713]: E0126 15:54:32.157040 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4hcdh564h5d6h588h546h5f5h649h99h6fh598h5bh99h567h74h9bh58dh54dh695h596h64hfbhd9hch55dhf4h69h5fch67hc7h7h64dq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7krgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod
ovsdbserver-nb-0_openstack(4567e561-0bd8-4368-8868-e2531d7bb8d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:54:32 crc kubenswrapper[4713]: E0126 15:54:32.747606 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 26 15:54:32 crc kubenswrapper[4713]: E0126 15:54:32.747839 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n545h665h596h7dh665h98h575h59dh5cbh7h5bdh664h5cbh55fh79h9fhcch556hcfh7bhc8h576hddh5d9h67dh595h59dh86h5dh574h67ch689q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjhsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl 
stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-c9tvd_openstack(518d38d7-b30e-4d67-a3d7-456e26fc9869): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:54:32 crc kubenswrapper[4713]: E0126 15:54:32.749909 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-c9tvd" podUID="518d38d7-b30e-4d67-a3d7-456e26fc9869" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.267728 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.268207 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67zqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx_openstack(d4f06dea-6c6e-4c23-a3e0-c10144d7338c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.269576 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" podUID="d4f06dea-6c6e-4c23-a3e0-c10144d7338c" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.436983 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" podUID="d4f06dea-6c6e-4c23-a3e0-c10144d7338c" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.437056 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-c9tvd" podUID="518d38d7-b30e-4d67-a3d7-456e26fc9869" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.477549 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Jan 26 15:54:33 crc kubenswrapper[4713]: E0126 15:54:33.477783 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n598h548h5b7h564h64h5dbh696h595h94h5dfh5fch99h5f6h546hb8h659h574hfbh594h589h5d7hch8fh566h7fh9bhdch5dch5c9h55dh5cbh579q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h55rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(4a0b03b5-597a-4c59-9784-218e9f9442d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:54:35 crc kubenswrapper[4713]: E0126 15:54:35.300930 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:54:35 crc kubenswrapper[4713]: E0126 15:54:35.301607 4713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:54:35 crc kubenswrapper[4713]: E0126 15:54:35.301750 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gx4pj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:54:35 crc kubenswrapper[4713]: E0126 15:54:35.303010 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" Jan 26 15:54:35 crc kubenswrapper[4713]: I0126 15:54:35.449289 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6637e535-e95f-407f-a97d-11da8ad9629c","Type":"ContainerStarted","Data":"edf175931ec239e115037777728e7a125496dd7dd4172c8920a288a46383dc5a"} Jan 26 15:54:35 crc kubenswrapper[4713]: I0126 15:54:35.449643 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 15:54:35 crc kubenswrapper[4713]: E0126 15:54:35.450598 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" Jan 26 15:54:35 crc kubenswrapper[4713]: I0126 15:54:35.469615 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.652391876 podStartE2EDuration="43.469597699s" podCreationTimestamp="2026-01-26 15:53:52 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.695662536 +0000 UTC m=+1230.832679771" lastFinishedPulling="2026-01-26 15:54:33.512868359 +0000 UTC m=+1248.649885594" observedRunningTime="2026-01-26 15:54:35.465980669 +0000 UTC m=+1250.602997904" watchObservedRunningTime="2026-01-26 15:54:35.469597699 +0000 UTC m=+1250.606614934" Jan 26 15:54:42 crc kubenswrapper[4713]: I0126 15:54:42.979879 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.342528 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.372636 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"] Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.384217 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"] Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.384318 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.536583 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.536730 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.536769 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lkb\" (UniqueName: \"kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.638483 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.638536 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lkb\" (UniqueName: \"kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.638655 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.639684 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.640298 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.669530 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lkb\" (UniqueName: 
\"kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb\") pod \"dnsmasq-dns-7cb5889db5-q9fhm\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:44 crc kubenswrapper[4713]: I0126 15:54:44.714133 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.566686 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.576791 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.584523 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-lth6x" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.584646 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.584706 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.584984 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.589457 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657328 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-cache\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657392 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2tg\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-kube-api-access-7m2tg\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657634 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657770 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-lock\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657830 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.657953 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0432b2d-538e-4b04-899b-6fe666f340de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760128 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-cache\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760427 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2tg\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-kube-api-access-7m2tg\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760487 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760531 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-lock\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760568 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.760626 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0432b2d-538e-4b04-899b-6fe666f340de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.761120 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-lock\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.761202 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0432b2d-538e-4b04-899b-6fe666f340de-cache\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.762539 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.763594 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.763637 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/feeecedd58642e83048263ebdca8a9f22781e17e2a0d6d5a9bd7a91571447a29/globalmount\"" pod="openstack/swift-storage-0"
Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.767142 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0432b2d-538e-4b04-899b-6fe666f340de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0"
Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.772931 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.779593 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2tg\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-kube-api-access-7m2tg\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0"
Jan 26 15:54:45 crc kubenswrapper[4713]: E0126 15:54:45.784222 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 15:54:45 crc kubenswrapper[4713]: E0126 15:54:45.784257 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 15:54:45 crc kubenswrapper[4713]: E0126 15:54:45.784330 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:54:46.284304812 +0000 UTC m=+1261.421322047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found
Jan 26 15:54:45 crc kubenswrapper[4713]: I0126 15:54:45.802197 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b17cee3-6a50-4f7c-aba8-ff0397aa16e4\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.150355 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7x6k4"]
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.152254 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.155414 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.155703 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.156077 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.166964 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7x6k4"]
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.176777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.176917 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zfrk\" (UniqueName: \"kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.176957 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.177078 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.177116 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.177197 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.177249 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.278975 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279041 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zfrk\" (UniqueName: \"kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279064 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279110 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279132 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279166 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.279193 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.280347 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.281041 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.281577 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.284902 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.286455 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.286893 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.307139 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zfrk\" (UniqueName: \"kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk\") pod \"swift-ring-rebalance-7x6k4\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.380877 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.381076 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.381095 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.381146 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:54:47.381128491 +0000 UTC m=+1262.518145726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.401647 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.401916 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n598h548h5b7h564h64h5dbh696h595h94h5dfh5fch99h5f6h546hb8h659h574hfbh594h589h5d7hch8fh566h7fh9bhdch5dch5c9h55dh5cbh579q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h55rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(4a0b03b5-597a-4c59-9784-218e9f9442d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.403165 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.423233 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.423546 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5f4hcdh564h5d6h588h546h5f5h649h99h6fh598h5bh99h567h74h9bh58dh54dh695h596h64hfbhd9hch55dhf4h69h5fch67hc7h7h64dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7krgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(4567e561-0bd8-4368-8868-e2531d7bb8d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:54:46 crc kubenswrapper[4713]: E0126 15:54:46.426859 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.541291 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-lth6x"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.548224 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7x6k4"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.549999 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" event={"ID":"deeee241-0904-4385-b17a-b390dfc5b2d4","Type":"ContainerStarted","Data":"9048908c87640b53fba10f9e2f6f0cc6aad944678f4c5666fec6cdd621edaa98"}
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.550188 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.553056 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" event={"ID":"a1acb746-e41c-4b08-aefb-1277d7e710c9","Type":"ContainerStarted","Data":"b72c0feba5807b0d71130d5d6c9440ae1b933620db2eb0d2e37c37bb5e67a9b1"}
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.553243 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.556739 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rl7z9" event={"ID":"d161dabd-5253-4929-998e-07f3d465a03d","Type":"ContainerStarted","Data":"1f4d1704d4bd6732588dc9d742b8fcae666305948c76e2db61afdb1f466a53c1"}
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.578808 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" podStartSLOduration=25.074858415 podStartE2EDuration="43.578791088s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.617492012 +0000 UTC m=+1230.754509247" lastFinishedPulling="2026-01-26 15:54:34.121424675 +0000 UTC m=+1249.258441920" observedRunningTime="2026-01-26 15:54:46.576730061 +0000 UTC m=+1261.713747306" watchObservedRunningTime="2026-01-26 15:54:46.578791088 +0000 UTC m=+1261.715808323"
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.592667 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cf79fdc1-80c7-4f65-98e0-b08803c07edc","Type":"ContainerStarted","Data":"7f328573650bd053af237a0dd764541fd3fa1b8e205b7a80afb22ad1db113955"}
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.609160 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b","Type":"ContainerStarted","Data":"9a3ebabe0fc541c90a9074d515c6d21a9d804ae534ff3aa00574e3738a345dfa"}
Jan 26 15:54:46 crc kubenswrapper[4713]: I0126 15:54:46.671825 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" podStartSLOduration=25.347505579 podStartE2EDuration="43.671788875s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.798136056 +0000 UTC m=+1230.935153291" lastFinishedPulling="2026-01-26 15:54:34.122419352 +0000 UTC m=+1249.259436587" observedRunningTime="2026-01-26 15:54:46.650528483 +0000 UTC m=+1261.787545739" watchObservedRunningTime="2026-01-26 15:54:46.671788875 +0000 UTC m=+1261.808806110"
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.304233 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"]
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.427611 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0"
Jan 26 15:54:47 crc kubenswrapper[4713]: E0126 15:54:47.428009 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 15:54:47 crc kubenswrapper[4713]: E0126 15:54:47.428337 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 15:54:47 crc kubenswrapper[4713]: E0126 15:54:47.428430 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:54:49.428410878 +0000 UTC m=+1264.565428113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.535720 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7x6k4"]
Jan 26 15:54:47 crc kubenswrapper[4713]: W0126 15:54:47.539680 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod125bdff8_6eff_4f59_9cc4_c986c5771aa0.slice/crio-f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449 WatchSource:0}: Error finding container f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449: Status 404 returned error can't find the container with id f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.618786 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"a45d2a2d-be1b-476e-8fbf-f9bdd5a97301","Type":"ContainerStarted","Data":"d8075dc10aba7a8fe160f0e9aa380897912673f8b660f89ba5fc5178be59b709"}
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.619297 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0"
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.620574 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e","Type":"ContainerStarted","Data":"d4f83f72fa722813fd35c01179909f366c440a02478b80215eeeb4929a1ec10d"}
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.623292 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"9144f526-8060-4b3b-bf78-26babcd1d963","Type":"ContainerStarted","Data":"0e0200953c5109db9c611257293e363dc66f5c7013c4b0446cd4d041a7ab6669"}
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.623391 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.624929 4713 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" event={"ID":"7cec4b59-6054-4f9b-998f-35059f5a12d6","Type":"ContainerStarted","Data":"325d909c4bf0dd4d0950e861b575a022ac6f1f0e27ea9ff0a8b3ac87c62c4f9b"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.629145 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" event={"ID":"912dd8bd-b0f7-441d-82fe-547964030ae5","Type":"ContainerStarted","Data":"f66329c4486bd1ea0086685cdf5867a19ea0c8dd420fe3b88b3dfd6a202f34bf"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.629460 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.631437 4713 generic.go:334] "Generic (PLEG): container finished" podID="100b22db-ec0d-40f0-975e-c86349b1890a" containerID="d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512" exitCode=0 Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.631501 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerDied","Data":"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.634024 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" event={"ID":"f47dae24-9ea7-4625-a367-43fd29037227","Type":"ContainerStarted","Data":"6014771e3b51e9578310d492dcda28403ae5cbacfe88d7ba6dc4a30fec81d7f6"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.634706 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.640554 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ba185c6c-eecc-45d1-adef-b3bd7fa84686","Type":"ContainerStarted","Data":"07165908097db0d35c9bcf547d90c33528c9d9aad5bb1d2daafdb30d1ca17665"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.641379 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.644261 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7x6k4" event={"ID":"125bdff8-6eff-4f59-9cc4-c986c5771aa0","Type":"ContainerStarted","Data":"f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.645808 4713 generic.go:334] "Generic (PLEG): container finished" podID="d161dabd-5253-4929-998e-07f3d465a03d" containerID="1f4d1704d4bd6732588dc9d742b8fcae666305948c76e2db61afdb1f466a53c1" exitCode=0 Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.647164 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rl7z9" event={"ID":"d161dabd-5253-4929-998e-07f3d465a03d","Type":"ContainerDied","Data":"1f4d1704d4bd6732588dc9d742b8fcae666305948c76e2db61afdb1f466a53c1"} Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.650459 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=25.693749729 podStartE2EDuration="44.650440783s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 
15:54:15.82307565 +0000 UTC m=+1230.960092885" lastFinishedPulling="2026-01-26 15:54:34.779766704 +0000 UTC m=+1249.916783939" observedRunningTime="2026-01-26 15:54:47.638249504 +0000 UTC m=+1262.775266729" watchObservedRunningTime="2026-01-26 15:54:47.650440783 +0000 UTC m=+1262.787458018" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.655245 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" podStartSLOduration=25.818519528 podStartE2EDuration="44.655227566s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.612000149 +0000 UTC m=+1230.749017384" lastFinishedPulling="2026-01-26 15:54:34.448708187 +0000 UTC m=+1249.585725422" observedRunningTime="2026-01-26 15:54:47.65462147 +0000 UTC m=+1262.791638705" watchObservedRunningTime="2026-01-26 15:54:47.655227566 +0000 UTC m=+1262.792244801" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.661897 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.685068 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-rstdb" podStartSLOduration=25.798607936 podStartE2EDuration="44.685049236s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.76053468 +0000 UTC m=+1230.897551915" lastFinishedPulling="2026-01-26 15:54:34.64697598 +0000 UTC m=+1249.783993215" observedRunningTime="2026-01-26 15:54:47.676428196 +0000 UTC m=+1262.813445431" watchObservedRunningTime="2026-01-26 15:54:47.685049236 +0000 UTC m=+1262.822066501" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.722798 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=26.186538384 podStartE2EDuration="44.722777065s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.58577499 +0000 UTC m=+1230.722792225" lastFinishedPulling="2026-01-26 15:54:34.122013671 +0000 UTC m=+1249.259030906" observedRunningTime="2026-01-26 15:54:47.714782093 +0000 UTC m=+1262.851799328" watchObservedRunningTime="2026-01-26 15:54:47.722777065 +0000 UTC m=+1262.859794300" Jan 26 15:54:47 crc kubenswrapper[4713]: I0126 15:54:47.739659 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=25.76100602 podStartE2EDuration="44.739640974s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.798137356 +0000 UTC m=+1230.935154591" lastFinishedPulling="2026-01-26 15:54:34.77677231 +0000 UTC m=+1249.913789545" observedRunningTime="2026-01-26 15:54:47.730568432 +0000 UTC m=+1262.867585667" watchObservedRunningTime="2026-01-26 15:54:47.739640974 +0000 UTC m=+1262.876658209" Jan 26 15:54:48 crc kubenswrapper[4713]: E0126 15:54:48.023072 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.654659 4713 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-c9tvd" event={"ID":"518d38d7-b30e-4d67-a3d7-456e26fc9869","Type":"ContainerStarted","Data":"77ffa77e48488f28a212699a6ea697a43fb80bc86bd481a59a8611d773d08820"} Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.655075 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c9tvd" Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.658770 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerStarted","Data":"216ea4814cf70f73837082e6ba6706de7ae7d6c3f28865b7f62196a1d7825419"} Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.660433 4713 generic.go:334] "Generic (PLEG): container finished" podID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" containerID="cff6c54368423bd421ff981a3b7a334fe46efeed47e1d4ff02e6ad9719e7a045" exitCode=0 Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.660488 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" event={"ID":"2a65ef24-0e05-415a-b7b7-6b44012b6c66","Type":"ContainerDied","Data":"cff6c54368423bd421ff981a3b7a334fe46efeed47e1d4ff02e6ad9719e7a045"} Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.666135 4713 generic.go:334] "Generic (PLEG): container finished" podID="446e46b1-a8cc-40fc-8947-d49fd0241bdd" containerID="0b0c722c8a5cb578831efe9f848a1bef1b66f9d350e22a03f840b73e69942490" exitCode=0 Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.666390 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" event={"ID":"446e46b1-a8cc-40fc-8947-d49fd0241bdd","Type":"ContainerDied","Data":"0b0c722c8a5cb578831efe9f848a1bef1b66f9d350e22a03f840b73e69942490"} Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.669933 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4567e561-0bd8-4368-8868-e2531d7bb8d3","Type":"ContainerStarted","Data":"61fac13205b282deb0d8a48882c4817ba5874e6b0f726fb64ecad93da3231ecc"} Jan 26 15:54:48 crc kubenswrapper[4713]: E0126 15:54:48.676055 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:48 crc kubenswrapper[4713]: I0126 15:54:48.678278 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c9tvd" podStartSLOduration=20.578559734 podStartE2EDuration="51.678257409s" podCreationTimestamp="2026-01-26 15:53:57 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.675842995 +0000 UTC m=+1230.812860230" lastFinishedPulling="2026-01-26 15:54:46.77554067 +0000 UTC m=+1261.912557905" observedRunningTime="2026-01-26 15:54:48.676351176 +0000 UTC m=+1263.813368421" watchObservedRunningTime="2026-01-26 15:54:48.678257409 +0000 UTC m=+1263.815274644" Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.337576 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 
15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.507265 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.507562 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.507688 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.507759 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:54:53.507736718 +0000 UTC m=+1268.644753953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.524033 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.664102 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.691197 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.692127 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rc2tb" event={"ID":"446e46b1-a8cc-40fc-8947-d49fd0241bdd","Type":"ContainerDied","Data":"7220dd128d886e9688b5175122995a8880695f0636b7a8d1a0e6512849c77046"} Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.692167 4713 scope.go:117] "RemoveContainer" containerID="0b0c722c8a5cb578831efe9f848a1bef1b66f9d350e22a03f840b73e69942490" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.697915 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerStarted","Data":"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453"} Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.698147 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.701324 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rl7z9" event={"ID":"d161dabd-5253-4929-998e-07f3d465a03d","Type":"ContainerStarted","Data":"641b311d8770f709fa785432388e12f6abd2951072013c3f1ef32d87bd9b053b"} Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.704714 4713 generic.go:334] "Generic (PLEG): container finished" podID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerID="6ed5a70e1bffd7c036cec773bfa3b7318ac55a83e8e997843decb29ead7e36e6" exitCode=0 Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.704781 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" event={"ID":"7cec4b59-6054-4f9b-998f-35059f5a12d6","Type":"ContainerDied","Data":"6ed5a70e1bffd7c036cec773bfa3b7318ac55a83e8e997843decb29ead7e36e6"} Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.711546 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hg7p\" (UniqueName: \"kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p\") pod \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.711872 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc\") pod \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.711944 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config\") pod \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\" (UID: \"446e46b1-a8cc-40fc-8947-d49fd0241bdd\") " Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.717211 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4","Type":"ContainerStarted","Data":"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"} Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.717675 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.719995 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"4a0b03b5-597a-4c59-9784-218e9f9442d1","Type":"ContainerStarted","Data":"0e259cbc5cd83b796c4e4247418efa5deeb5e865f6eea59f646e9221d175dce9"} Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.727146 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:49 crc kubenswrapper[4713]: E0126 15:54:49.734825 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.738598 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=46.548839568 podStartE2EDuration="1m1.738573588s" podCreationTimestamp="2026-01-26 15:53:48 +0000 UTC" firstStartedPulling="2026-01-26 15:53:57.992617062 +0000 UTC m=+1213.129634297" lastFinishedPulling="2026-01-26 15:54:13.182351082 +0000 UTC m=+1228.319368317" observedRunningTime="2026-01-26 15:54:49.723294833 +0000 UTC m=+1264.860312068" watchObservedRunningTime="2026-01-26 15:54:49.738573588 +0000 UTC m=+1264.875590833" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.767218 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.164425915 podStartE2EDuration="55.767199354s" podCreationTimestamp="2026-01-26 15:53:54 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.617455211 +0000 UTC m=+1230.754472446" lastFinishedPulling="2026-01-26 15:54:48.22022865 +0000 UTC m=+1263.357245885" observedRunningTime="2026-01-26 15:54:49.761564917 +0000 UTC m=+1264.898582152" watchObservedRunningTime="2026-01-26 15:54:49.767199354 +0000 UTC m=+1264.904216589" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.809625 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p" (OuterVolumeSpecName: "kube-api-access-8hg7p") pod "446e46b1-a8cc-40fc-8947-d49fd0241bdd" (UID: "446e46b1-a8cc-40fc-8947-d49fd0241bdd"). InnerVolumeSpecName "kube-api-access-8hg7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:54:49 crc kubenswrapper[4713]: I0126 15:54:49.816444 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hg7p\" (UniqueName: \"kubernetes.io/projected/446e46b1-a8cc-40fc-8947-d49fd0241bdd-kube-api-access-8hg7p\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.523455 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.532417 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config" (OuterVolumeSpecName: "config") pod "446e46b1-a8cc-40fc-8947-d49fd0241bdd" (UID: "446e46b1-a8cc-40fc-8947-d49fd0241bdd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.543417 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "446e46b1-a8cc-40fc-8947-d49fd0241bdd" (UID: "446e46b1-a8cc-40fc-8947-d49fd0241bdd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.632601 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.632644 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e46b1-a8cc-40fc-8947-d49fd0241bdd-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.731639 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"a25c5d9b-6658-4b9a-8fe7-fb4b3714696e","Type":"ContainerStarted","Data":"0fad6045424b06e875103a2dec64d864f6bc9180087e04ad094d9b9ccd9b5885"} Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.732082 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.735463 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" event={"ID":"d4f06dea-6c6e-4c23-a3e0-c10144d7338c","Type":"ContainerStarted","Data":"4fc7d736bb0b9b9ce63cfc67594b5ae3e210bf7ad8e43f48990a2a763ec619a7"} Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.735759 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.736038 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.737054 4713 generic.go:334] "Generic (PLEG): container finished" podID="5bba60c2-25f6-41a7-a231-51fc5a6a9d3b" containerID="9a3ebabe0fc541c90a9074d515c6d21a9d804ae534ff3aa00574e3738a345dfa" exitCode=0 Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.737131 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b","Type":"ContainerDied","Data":"9a3ebabe0fc541c90a9074d515c6d21a9d804ae534ff3aa00574e3738a345dfa"} Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.737215 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" podUID="d4f06dea-6c6e-4c23-a3e0-c10144d7338c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.121:8081/ready\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.737493 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.740316 4713 generic.go:334] "Generic (PLEG): container finished" podID="7a575e00-cd12-498f-b8a4-0806737389d9" containerID="644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f" exitCode=0 Jan 
26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.741288 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerDied","Data":"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f"} Jan 26 15:54:50 crc kubenswrapper[4713]: E0126 15:54:50.742577 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.759961 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rc2tb"] Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.776004 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=34.664230011 podStartE2EDuration="56.775982771s" podCreationTimestamp="2026-01-26 15:53:54 +0000 UTC" firstStartedPulling="2026-01-26 15:54:12.664207018 +0000 UTC m=+1227.801224253" lastFinishedPulling="2026-01-26 15:54:34.775959778 +0000 UTC m=+1249.912977013" observedRunningTime="2026-01-26 15:54:50.768878823 +0000 UTC m=+1265.905896058" watchObservedRunningTime="2026-01-26 15:54:50.775982771 +0000 UTC m=+1265.913000006" Jan 26 15:54:50 crc kubenswrapper[4713]: I0126 15:54:50.796485 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" podStartSLOduration=-9223371989.058315 podStartE2EDuration="47.79646024s" podCreationTimestamp="2026-01-26 15:54:03 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.600238622 +0000 UTC m=+1230.737255857" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:54:50.787989464 +0000 UTC m=+1265.925006709" watchObservedRunningTime="2026-01-26 15:54:50.79646024 +0000 UTC m=+1265.933477475" Jan 26 15:54:51 crc kubenswrapper[4713]: I0126 15:54:51.662003 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:51 crc kubenswrapper[4713]: E0126 15:54:51.671525 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 15:54:51 crc kubenswrapper[4713]: E0126 15:54:51.739504 4713 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 15:54:51 crc kubenswrapper[4713]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/2a65ef24-0e05-415a-b7b7-6b44012b6c66/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 15:54:51 crc kubenswrapper[4713]: > podSandboxID="d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb" Jan 26 15:54:51 crc kubenswrapper[4713]: E0126 15:54:51.739660 4713 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 15:54:51 crc kubenswrapper[4713]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fst5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jq69z_openstack(2a65ef24-0e05-415a-b7b7-6b44012b6c66): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/2a65ef24-0e05-415a-b7b7-6b44012b6c66/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 15:54:51 crc kubenswrapper[4713]: > logger="UnhandledError" Jan 26 15:54:51 crc kubenswrapper[4713]: E0126 15:54:51.740965 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/2a65ef24-0e05-415a-b7b7-6b44012b6c66/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" Jan 26 15:54:51 crc kubenswrapper[4713]: I0126 15:54:51.751290 4713 generic.go:334] "Generic (PLEG): container finished" podID="cf79fdc1-80c7-4f65-98e0-b08803c07edc" containerID="7f328573650bd053af237a0dd764541fd3fa1b8e205b7a80afb22ad1db113955" exitCode=0 Jan 26 15:54:51 crc 
kubenswrapper[4713]: I0126 15:54:51.751408 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cf79fdc1-80c7-4f65-98e0-b08803c07edc","Type":"ContainerDied","Data":"7f328573650bd053af237a0dd764541fd3fa1b8e205b7a80afb22ad1db113955"} Jan 26 15:54:51 crc kubenswrapper[4713]: I0126 15:54:51.755271 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerStarted","Data":"cf102c4cad020291c4283d2335f8456bed242d1e954a454c28b692d2172bece3"} Jan 26 15:54:51 crc kubenswrapper[4713]: I0126 15:54:51.764705 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx" Jan 26 15:54:51 crc kubenswrapper[4713]: I0126 15:54:51.833665 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd" path="/var/lib/kubelet/pods/446e46b1-a8cc-40fc-8947-d49fd0241bdd/volumes" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.764285 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5bba60c2-25f6-41a7-a231-51fc5a6a9d3b","Type":"ContainerStarted","Data":"8e6f265af857bc9461467367727a58a93571c7e52202b07b769425b9b19740c4"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.766884 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7x6k4" event={"ID":"125bdff8-6eff-4f59-9cc4-c986c5771aa0","Type":"ContainerStarted","Data":"bc02ff4a2d1e2a5620734a87b6230e9e341663d16275e0c983018eb797521ded"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.769206 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rl7z9" event={"ID":"d161dabd-5253-4929-998e-07f3d465a03d","Type":"ContainerStarted","Data":"4da43783549ce0f6dfe19a9556a0ee0b0f3946580d45258e937541f2a92f7abb"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.769337 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.771445 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerStarted","Data":"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.771632 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.774337 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cf79fdc1-80c7-4f65-98e0-b08803c07edc","Type":"ContainerStarted","Data":"7332740595d21ba73166da0f036ca8cdb2f5509516f613c09bb4910da3be30fb"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.775887 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" event={"ID":"7cec4b59-6054-4f9b-998f-35059f5a12d6","Type":"ContainerStarted","Data":"38d64878c171696ca10230af8df3d35da9cf890b721c59982be9ad670b260434"} Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.776113 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.796129 4713 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=43.251211372 podStartE2EDuration="1m1.796110174s" podCreationTimestamp="2026-01-26 15:53:51 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.576496692 +0000 UTC m=+1230.713513927" lastFinishedPulling="2026-01-26 15:54:34.121395494 +0000 UTC m=+1249.258412729" observedRunningTime="2026-01-26 15:54:52.791851945 +0000 UTC m=+1267.928869180" watchObservedRunningTime="2026-01-26 15:54:52.796110174 +0000 UTC m=+1267.933127409" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.824645 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=45.320673894 podStartE2EDuration="1m3.824621097s" podCreationTimestamp="2026-01-26 15:53:49 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.61743607 +0000 UTC m=+1230.754453305" lastFinishedPulling="2026-01-26 15:54:34.121383273 +0000 UTC m=+1249.258400508" observedRunningTime="2026-01-26 15:54:52.815948905 +0000 UTC m=+1267.952966150" watchObservedRunningTime="2026-01-26 15:54:52.824621097 +0000 UTC m=+1267.961638332" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.855300 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7x6k4" podStartSLOduration=2.548617762 podStartE2EDuration="6.855279139s" podCreationTimestamp="2026-01-26 15:54:46 +0000 UTC" firstStartedPulling="2026-01-26 15:54:47.545094843 +0000 UTC m=+1262.682112078" lastFinishedPulling="2026-01-26 15:54:51.85175622 +0000 UTC m=+1266.988773455" observedRunningTime="2026-01-26 15:54:52.850855966 +0000 UTC m=+1267.987873201" watchObservedRunningTime="2026-01-26 15:54:52.855279139 +0000 UTC m=+1267.992296384" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.883185 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" podStartSLOduration=8.883165385 podStartE2EDuration="8.883165385s" podCreationTimestamp="2026-01-26 15:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:54:52.874551365 +0000 UTC m=+1268.011568610" watchObservedRunningTime="2026-01-26 15:54:52.883165385 +0000 UTC m=+1268.020182620" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.899839 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=64.899823228 podStartE2EDuration="1m4.899823228s" podCreationTimestamp="2026-01-26 15:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:54:52.89378277 +0000 UTC m=+1268.030800005" watchObservedRunningTime="2026-01-26 15:54:52.899823228 +0000 UTC m=+1268.036840463" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 15:54:52.921480 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rl7z9" podStartSLOduration=38.49130741 podStartE2EDuration="55.92145632s" podCreationTimestamp="2026-01-26 15:53:57 +0000 UTC" firstStartedPulling="2026-01-26 15:54:16.691207073 +0000 UTC m=+1231.828224308" lastFinishedPulling="2026-01-26 15:54:34.121355983 +0000 UTC m=+1249.258373218" observedRunningTime="2026-01-26 15:54:52.912650735 +0000 UTC m=+1268.049667970" watchObservedRunningTime="2026-01-26 15:54:52.92145632 +0000 UTC m=+1268.058473555" Jan 26 15:54:52 crc kubenswrapper[4713]: I0126 
15:54:52.951115 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:54:53 crc kubenswrapper[4713]: I0126 15:54:53.564432 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.566093 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:53 crc kubenswrapper[4713]: I0126 15:54:53.594966 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.595142 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.595172 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.595239 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:55:01.595218769 +0000 UTC m=+1276.732236004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found Jan 26 15:54:53 crc kubenswrapper[4713]: I0126 15:54:53.611635 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 15:54:53 crc kubenswrapper[4713]: I0126 15:54:53.662512 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.664677 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 15:54:53 crc kubenswrapper[4713]: E0126 15:54:53.786341 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="4567e561-0bd8-4368-8868-e2531d7bb8d3" Jan 26 15:54:54 crc kubenswrapper[4713]: I0126 15:54:54.590003 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 15:54:54 crc kubenswrapper[4713]: I0126 15:54:54.710873 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:54 crc kubenswrapper[4713]: I0126 15:54:54.757847 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 15:54:55 crc kubenswrapper[4713]: E0126 15:54:55.150592 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 15:54:55 crc kubenswrapper[4713]: E0126 15:54:55.801909 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4a0b03b5-597a-4c59-9784-218e9f9442d1" Jan 26 15:54:59 crc kubenswrapper[4713]: I0126 15:54:59.716598 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:54:59 crc kubenswrapper[4713]: I0126 15:54:59.787180 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:54:59 crc kubenswrapper[4713]: I0126 15:54:59.954852 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.559104 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.727207 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fst5m\" (UniqueName: \"kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m\") pod \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.727264 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc\") pod \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.727301 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config\") pod \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\" (UID: \"2a65ef24-0e05-415a-b7b7-6b44012b6c66\") " Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.739553 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m" (OuterVolumeSpecName: "kube-api-access-fst5m") pod "2a65ef24-0e05-415a-b7b7-6b44012b6c66" (UID: "2a65ef24-0e05-415a-b7b7-6b44012b6c66"). InnerVolumeSpecName "kube-api-access-fst5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.789636 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a65ef24-0e05-415a-b7b7-6b44012b6c66" (UID: "2a65ef24-0e05-415a-b7b7-6b44012b6c66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.804879 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config" (OuterVolumeSpecName: "config") pod "2a65ef24-0e05-415a-b7b7-6b44012b6c66" (UID: "2a65ef24-0e05-415a-b7b7-6b44012b6c66"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.829616 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.829666 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fst5m\" (UniqueName: \"kubernetes.io/projected/2a65ef24-0e05-415a-b7b7-6b44012b6c66-kube-api-access-fst5m\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.829685 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a65ef24-0e05-415a-b7b7-6b44012b6c66-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.853071 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerStarted","Data":"6ade85f769ecc88afcb608235aca93e4dacb847e3e88a69786faf1b28018c6ec"} Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.854815 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" event={"ID":"2a65ef24-0e05-415a-b7b7-6b44012b6c66","Type":"ContainerDied","Data":"d798656b594a274767d4c5811d0bb68fb00ff9de0533d763a0cb2ea5e4c4f9eb"} Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.854851 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jq69z" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.854871 4713 scope.go:117] "RemoveContainer" containerID="cff6c54368423bd421ff981a3b7a334fe46efeed47e1d4ff02e6ad9719e7a045" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.890268 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.150860369 podStartE2EDuration="1m6.890246777s" podCreationTimestamp="2026-01-26 15:53:54 +0000 UTC" firstStartedPulling="2026-01-26 15:54:08.784445269 +0000 UTC m=+1223.921462524" lastFinishedPulling="2026-01-26 15:55:00.523831697 +0000 UTC m=+1275.660848932" observedRunningTime="2026-01-26 15:55:00.885776333 +0000 UTC m=+1276.022793578" watchObservedRunningTime="2026-01-26 15:55:00.890246777 +0000 UTC m=+1276.027264022" Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.936256 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:55:00 crc kubenswrapper[4713]: I0126 15:55:00.952327 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jq69z"] Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.252289 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.252654 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.347441 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.642691 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") 
pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:55:01 crc kubenswrapper[4713]: E0126 15:55:01.642995 4713 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:55:01 crc kubenswrapper[4713]: E0126 15:55:01.643039 4713 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:55:01 crc kubenswrapper[4713]: E0126 15:55:01.643131 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift podName:d0432b2d-538e-4b04-899b-6fe666f340de nodeName:}" failed. No retries permitted until 2026-01-26 15:55:17.643104195 +0000 UTC m=+1292.780121440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift") pod "swift-storage-0" (UID: "d0432b2d-538e-4b04-899b-6fe666f340de") : configmap "swift-ring-files" not found Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.813242 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" path="/var/lib/kubelet/pods/2a65ef24-0e05-415a-b7b7-6b44012b6c66/volumes" Jan 26 15:55:01 crc kubenswrapper[4713]: I0126 15:55:01.934105 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.630236 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4dgsg"] Jan 26 15:55:02 crc kubenswrapper[4713]: E0126 15:55:02.630903 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.630922 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: E0126 15:55:02.630953 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.630960 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.631148 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="446e46b1-a8cc-40fc-8947-d49fd0241bdd" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.631159 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a65ef24-0e05-415a-b7b7-6b44012b6c66" containerName="init" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.631788 4713 util.go:30] "No sandbox for pod can be found. 
Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.644136 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dgsg"] Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.675858 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.675902 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.761780 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.761965 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m647x\" (UniqueName: \"kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.864782 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.864933 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m647x\" (UniqueName: \"kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.865976 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.874159 4713 generic.go:334] "Generic (PLEG): container finished" podID="125bdff8-6eff-4f59-9cc4-c986c5771aa0" containerID="bc02ff4a2d1e2a5620734a87b6230e9e341663d16275e0c983018eb797521ded" exitCode=0 Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.874244 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7x6k4" event={"ID":"125bdff8-6eff-4f59-9cc4-c986c5771aa0","Type":"ContainerDied","Data":"bc02ff4a2d1e2a5620734a87b6230e9e341663d16275e0c983018eb797521ded"} Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.892250 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m647x\" (UniqueName: \"kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x\") pod \"keystone-db-create-4dgsg\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 
15:55:02.949833 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.969605 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-glbp5"] Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.970935 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-glbp5" Jan 26 15:55:02 crc kubenswrapper[4713]: I0126 15:55:02.998480 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a2f4-account-create-update-mlshf"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.000037 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.009738 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.012229 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-glbp5"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.048899 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a2f4-account-create-update-mlshf"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.072238 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.073269 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.073400 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2rk\" (UniqueName: \"kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.073527 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmvnb\" (UniqueName: \"kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.188894 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.189163 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.189220 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2rk\" (UniqueName: \"kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.189288 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmvnb\" (UniqueName: \"kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.190315 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.191042 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.201252 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e73f-account-create-update-kb58x"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.202584 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.206479 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.223192 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e73f-account-create-update-kb58x"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.231242 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmvnb\" (UniqueName: \"kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb\") pod \"keystone-a2f4-account-create-update-mlshf\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.267687 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2rk\" (UniqueName: \"kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk\") pod \"placement-db-create-glbp5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.286377 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-44wbp"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.287714 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.294788 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkr74\" (UniqueName: \"kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.294906 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.364654 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-44wbp"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.396914 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.397806 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.397846 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fkr74\" (UniqueName: \"kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.397885 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlsb\" (UniqueName: \"kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.397648 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.408553 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0f26-account-create-update-lpqwm"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.409793 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.413444 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.431772 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0f26-account-create-update-lpqwm"] Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.440722 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-glbp5" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.481924 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkr74\" (UniqueName: \"kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74\") pod \"placement-e73f-account-create-update-kb58x\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.491332 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.499551 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts\") pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.499610 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvr2\" (UniqueName: \"kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2\") pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.499697 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.499760 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlsb\" (UniqueName: \"kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.501106 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.552946 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.555322 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlsb\" (UniqueName: \"kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb\") pod \"glance-db-create-44wbp\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.555352 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-qngml" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.605861 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts\") pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.606101 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvr2\" (UniqueName: \"kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2\") 
pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.613038 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts\") pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.634966 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.635513 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvr2\" (UniqueName: \"kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2\") pod \"glance-0f26-account-create-update-lpqwm\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.641735 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-44wbp" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.742846 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.755315 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.897791 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-hvlk9" Jan 26 15:55:03 crc kubenswrapper[4713]: I0126 15:55:03.955584 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-xdr68" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.072551 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dgsg"] Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.547295 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.580962 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a2f4-account-create-update-mlshf"] Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.629014 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-44wbp"] Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.659054 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-glbp5"] Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.670333 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0f26-account-create-update-lpqwm"] Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.692218 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e73f-account-create-update-kb58x"] Jan 26 15:55:04 crc kubenswrapper[4713]: W0126 15:55:04.710892 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db8c9f9_5fba_4647_841a_71f4bc24f438.slice/crio-74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0 WatchSource:0}: Error finding container 74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0: Status 404 returned error can't find the container with id 74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0 Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731095 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731166 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731208 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731232 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731286 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731382 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zfrk\" (UniqueName: \"kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: 
\"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.731558 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices\") pod \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\" (UID: \"125bdff8-6eff-4f59-9cc4-c986c5771aa0\") " Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.733579 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.735392 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.743579 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk" (OuterVolumeSpecName: "kube-api-access-8zfrk") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "kube-api-access-8zfrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.754535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.796598 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.814804 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.825348 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts" (OuterVolumeSpecName: "scripts") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.828435 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "125bdff8-6eff-4f59-9cc4-c986c5771aa0" (UID: "125bdff8-6eff-4f59-9cc4-c986c5771aa0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837054 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zfrk\" (UniqueName: \"kubernetes.io/projected/125bdff8-6eff-4f59-9cc4-c986c5771aa0-kube-api-access-8zfrk\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837233 4713 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837344 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/125bdff8-6eff-4f59-9cc4-c986c5771aa0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837464 4713 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/125bdff8-6eff-4f59-9cc4-c986c5771aa0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837563 4713 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837684 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.837781 4713 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/125bdff8-6eff-4f59-9cc4-c986c5771aa0-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.952836 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e73f-account-create-update-kb58x" event={"ID":"9db8c9f9-5fba-4647-841a-71f4bc24f438","Type":"ContainerStarted","Data":"cb89a9841538137dbb9da638ec59fdc923ee1f271770d546d7b1a46096d2068a"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.952890 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e73f-account-create-update-kb58x" event={"ID":"9db8c9f9-5fba-4647-841a-71f4bc24f438","Type":"ContainerStarted","Data":"74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.958396 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7x6k4" event={"ID":"125bdff8-6eff-4f59-9cc4-c986c5771aa0","Type":"ContainerDied","Data":"f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.958436 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6fc27f04c42be5a51d2687a6e65b04cff7354379a9815ac50511c42d89f4449" Jan 26 15:55:04 crc 
kubenswrapper[4713]: I0126 15:55:04.958538 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7x6k4" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.961450 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-44wbp" event={"ID":"24468eba-9d4f-446e-ac2d-39c4855686ff","Type":"ContainerStarted","Data":"2c0963094051b7349c365e4a6ebe649340386eb6d450e2063c80cade032387b7"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.961499 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-44wbp" event={"ID":"24468eba-9d4f-446e-ac2d-39c4855686ff","Type":"ContainerStarted","Data":"bbc1a1d8f108f75aa60539d5b4ba138d0587b5f106baffd9e75e2ceb78801980"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.981653 4713 generic.go:334] "Generic (PLEG): container finished" podID="4e809e90-de29-4ad5-ad0f-8dc49a202b3f" containerID="2c005a844b05169d2ba38dc826ee25b60bd27ca2bb10d361260d66910d20268c" exitCode=0 Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.981746 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dgsg" event={"ID":"4e809e90-de29-4ad5-ad0f-8dc49a202b3f","Type":"ContainerDied","Data":"2c005a844b05169d2ba38dc826ee25b60bd27ca2bb10d361260d66910d20268c"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.981770 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dgsg" event={"ID":"4e809e90-de29-4ad5-ad0f-8dc49a202b3f","Type":"ContainerStarted","Data":"f8923df9e64fb0127295e77721417ca94b6f0a9534f6c3213d339c6048e344f0"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.982330 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e73f-account-create-update-kb58x" podStartSLOduration=1.982307386 podStartE2EDuration="1.982307386s" podCreationTimestamp="2026-01-26 15:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:04.970488337 +0000 UTC m=+1280.107505572" watchObservedRunningTime="2026-01-26 15:55:04.982307386 +0000 UTC m=+1280.119324621" Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.996709 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2f4-account-create-update-mlshf" event={"ID":"10c39f53-9957-4f6a-912c-1c2217af11f1","Type":"ContainerStarted","Data":"85e010915930eddbcf5ce2b55f5e30e16eba1b3225e8ad977bdf9fbdc8334d54"} Jan 26 15:55:04 crc kubenswrapper[4713]: I0126 15:55:04.996765 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2f4-account-create-update-mlshf" event={"ID":"10c39f53-9957-4f6a-912c-1c2217af11f1","Type":"ContainerStarted","Data":"84b22c6372cca43f64e7aadaa130e9f6590bed2ac3798dcab721b3508619f7ca"} Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.002665 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-44wbp" podStartSLOduration=2.002642381 podStartE2EDuration="2.002642381s" podCreationTimestamp="2026-01-26 15:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:04.98534841 +0000 UTC m=+1280.122365645" watchObservedRunningTime="2026-01-26 15:55:05.002642381 +0000 UTC m=+1280.139659626" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.009486 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f26-account-create-update-lpqwm" event={"ID":"b6598bbb-908a-4758-91c0-e72d8a9d4da7","Type":"ContainerStarted","Data":"1efbb0d0e89581fbc9606a982cfeacccdda4cec7466859356c3537cfa76646d2"}
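
In the startup-latency entries above, podStartSLOduration and podStartE2EDuration coincide (for example 1.982307386s for placement-e73f-account-create-update-kb58x) because the zero-valued firstStartedPulling/lastFinishedPulling timestamps mean no image pull was observed: the SLO figure is the end-to-end startup time minus the image-pull window. The prometheus-metric-storage-0 entry at 15:55:00.890268 shows the non-trivial case, and its numbers check out exactly against the monotonic m= clock offsets:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures from the 15:55:00.890268 entry for prometheus-metric-storage-0.
        e2e := 66890246777 * time.Nanosecond // podStartE2EDuration="1m6.890246777s"
        // Image-pull window from the monotonic clock offsets (m=+...):
        firstPull := 1223921462524 * time.Microsecond // firstStartedPulling m=+1223.921462524
        lastPull := 1275660848932 * time.Microsecond  // lastFinishedPulling m=+1275.660848932

        slo := e2e - (lastPull - firstPull)
        fmt.Println(slo) // 15.150860369s == podStartSLOduration
    }

So of the 66.89s end-to-end startup, 51.74s was spent pulling images, leaving the 15.15s that counts against the startup SLO.
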
"SyncLoop (PLEG): event for pod" pod="openstack/glance-0f26-account-create-update-lpqwm" event={"ID":"b6598bbb-908a-4758-91c0-e72d8a9d4da7","Type":"ContainerStarted","Data":"1efbb0d0e89581fbc9606a982cfeacccdda4cec7466859356c3537cfa76646d2"} Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.009533 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f26-account-create-update-lpqwm" event={"ID":"b6598bbb-908a-4758-91c0-e72d8a9d4da7","Type":"ContainerStarted","Data":"89f334c8421fc5abcbc745bc886a95caba249cca7fa837ceb0bcc7d085929c36"} Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.018630 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-glbp5" event={"ID":"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5","Type":"ContainerStarted","Data":"61b47ece138533de1d05d51fa484867c4c7a0c39e0c6680447e38400200fe2a7"} Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.018681 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-glbp5" event={"ID":"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5","Type":"ContainerStarted","Data":"dd41ef4387d9879f65848cbee3b3cc3c9a873ff37928538311b2e7b99e1b1b34"} Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.039637 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-a2f4-account-create-update-mlshf" podStartSLOduration=3.039615479 podStartE2EDuration="3.039615479s" podCreationTimestamp="2026-01-26 15:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:05.036515793 +0000 UTC m=+1280.173533018" watchObservedRunningTime="2026-01-26 15:55:05.039615479 +0000 UTC m=+1280.176632714" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.067641 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-glbp5" podStartSLOduration=3.067597618 podStartE2EDuration="3.067597618s" podCreationTimestamp="2026-01-26 15:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:05.057227059 +0000 UTC m=+1280.194244294" watchObservedRunningTime="2026-01-26 15:55:05.067597618 +0000 UTC m=+1280.204614853" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.067767 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.095317 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0f26-account-create-update-lpqwm" podStartSLOduration=2.095294578 podStartE2EDuration="2.095294578s" podCreationTimestamp="2026-01-26 15:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:05.072692749 +0000 UTC m=+1280.209709984" watchObservedRunningTime="2026-01-26 15:55:05.095294578 +0000 UTC m=+1280.232311813" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.145077 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 26 15:55:05 crc kubenswrapper[4713]: I0126 15:55:05.690895 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.027983 4713 
generic.go:334] "Generic (PLEG): container finished" podID="10c39f53-9957-4f6a-912c-1c2217af11f1" containerID="85e010915930eddbcf5ce2b55f5e30e16eba1b3225e8ad977bdf9fbdc8334d54" exitCode=0 Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.028059 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2f4-account-create-update-mlshf" event={"ID":"10c39f53-9957-4f6a-912c-1c2217af11f1","Type":"ContainerDied","Data":"85e010915930eddbcf5ce2b55f5e30e16eba1b3225e8ad977bdf9fbdc8334d54"} Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.030865 4713 generic.go:334] "Generic (PLEG): container finished" podID="b6598bbb-908a-4758-91c0-e72d8a9d4da7" containerID="1efbb0d0e89581fbc9606a982cfeacccdda4cec7466859356c3537cfa76646d2" exitCode=0 Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.030935 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f26-account-create-update-lpqwm" event={"ID":"b6598bbb-908a-4758-91c0-e72d8a9d4da7","Type":"ContainerDied","Data":"1efbb0d0e89581fbc9606a982cfeacccdda4cec7466859356c3537cfa76646d2"} Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.034200 4713 generic.go:334] "Generic (PLEG): container finished" podID="fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" containerID="61b47ece138533de1d05d51fa484867c4c7a0c39e0c6680447e38400200fe2a7" exitCode=0 Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.034268 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-glbp5" event={"ID":"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5","Type":"ContainerDied","Data":"61b47ece138533de1d05d51fa484867c4c7a0c39e0c6680447e38400200fe2a7"} Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.035799 4713 generic.go:334] "Generic (PLEG): container finished" podID="9db8c9f9-5fba-4647-841a-71f4bc24f438" containerID="cb89a9841538137dbb9da638ec59fdc923ee1f271770d546d7b1a46096d2068a" exitCode=0 Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.035845 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e73f-account-create-update-kb58x" event={"ID":"9db8c9f9-5fba-4647-841a-71f4bc24f438","Type":"ContainerDied","Data":"cb89a9841538137dbb9da638ec59fdc923ee1f271770d546d7b1a46096d2068a"} Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.036972 4713 generic.go:334] "Generic (PLEG): container finished" podID="24468eba-9d4f-446e-ac2d-39c4855686ff" containerID="2c0963094051b7349c365e4a6ebe649340386eb6d450e2063c80cade032387b7" exitCode=0 Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.037200 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-44wbp" event={"ID":"24468eba-9d4f-446e-ac2d-39c4855686ff","Type":"ContainerDied","Data":"2c0963094051b7349c365e4a6ebe649340386eb6d450e2063c80cade032387b7"} Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.394225 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.468056 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m647x\" (UniqueName: \"kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x\") pod \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.468195 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts\") pod \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\" (UID: \"4e809e90-de29-4ad5-ad0f-8dc49a202b3f\") " Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.469112 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e809e90-de29-4ad5-ad0f-8dc49a202b3f" (UID: "4e809e90-de29-4ad5-ad0f-8dc49a202b3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.472757 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x" (OuterVolumeSpecName: "kube-api-access-m647x") pod "4e809e90-de29-4ad5-ad0f-8dc49a202b3f" (UID: "4e809e90-de29-4ad5-ad0f-8dc49a202b3f"). InnerVolumeSpecName "kube-api-access-m647x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.570635 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m647x\" (UniqueName: \"kubernetes.io/projected/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-kube-api-access-m647x\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:06 crc kubenswrapper[4713]: I0126 15:55:06.570959 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e809e90-de29-4ad5-ad0f-8dc49a202b3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.047486 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dgsg" event={"ID":"4e809e90-de29-4ad5-ad0f-8dc49a202b3f","Type":"ContainerDied","Data":"f8923df9e64fb0127295e77721417ca94b6f0a9534f6c3213d339c6048e344f0"} Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.047534 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8923df9e64fb0127295e77721417ca94b6f0a9534f6c3213d339c6048e344f0" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.047612 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgsg" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.491265 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-44wbp" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.595136 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts\") pod \"24468eba-9d4f-446e-ac2d-39c4855686ff\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.595320 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlsb\" (UniqueName: \"kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb\") pod \"24468eba-9d4f-446e-ac2d-39c4855686ff\" (UID: \"24468eba-9d4f-446e-ac2d-39c4855686ff\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.596947 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24468eba-9d4f-446e-ac2d-39c4855686ff" (UID: "24468eba-9d4f-446e-ac2d-39c4855686ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.600312 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24468eba-9d4f-446e-ac2d-39c4855686ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.601800 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb" (OuterVolumeSpecName: "kube-api-access-bvlsb") pod "24468eba-9d4f-446e-ac2d-39c4855686ff" (UID: "24468eba-9d4f-446e-ac2d-39c4855686ff"). InnerVolumeSpecName "kube-api-access-bvlsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.702469 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvlsb\" (UniqueName: \"kubernetes.io/projected/24468eba-9d4f-446e-ac2d-39c4855686ff-kube-api-access-bvlsb\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.746091 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.753824 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-glbp5" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.761625 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.771731 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.904821 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts\") pod \"10c39f53-9957-4f6a-912c-1c2217af11f1\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905137 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts\") pod \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905195 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkr74\" (UniqueName: \"kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74\") pod \"9db8c9f9-5fba-4647-841a-71f4bc24f438\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905230 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10c39f53-9957-4f6a-912c-1c2217af11f1" (UID: "10c39f53-9957-4f6a-912c-1c2217af11f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905290 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmvnb\" (UniqueName: \"kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb\") pod \"10c39f53-9957-4f6a-912c-1c2217af11f1\" (UID: \"10c39f53-9957-4f6a-912c-1c2217af11f1\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905313 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts\") pod \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905345 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts\") pod \"9db8c9f9-5fba-4647-841a-71f4bc24f438\" (UID: \"9db8c9f9-5fba-4647-841a-71f4bc24f438\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905401 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkvr2\" (UniqueName: \"kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2\") pod \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\" (UID: \"b6598bbb-908a-4758-91c0-e72d8a9d4da7\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905428 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2rk\" (UniqueName: \"kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk\") pod \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\" (UID: \"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5\") " Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905703 4713 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6598bbb-908a-4758-91c0-e72d8a9d4da7" (UID: "b6598bbb-908a-4758-91c0-e72d8a9d4da7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905973 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6598bbb-908a-4758-91c0-e72d8a9d4da7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.905992 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c39f53-9957-4f6a-912c-1c2217af11f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.906029 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" (UID: "fc12f2df-b90e-4bb9-a255-5e5353ed1dd5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.906539 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9db8c9f9-5fba-4647-841a-71f4bc24f438" (UID: "9db8c9f9-5fba-4647-841a-71f4bc24f438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.909761 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74" (OuterVolumeSpecName: "kube-api-access-fkr74") pod "9db8c9f9-5fba-4647-841a-71f4bc24f438" (UID: "9db8c9f9-5fba-4647-841a-71f4bc24f438"). InnerVolumeSpecName "kube-api-access-fkr74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.910077 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2" (OuterVolumeSpecName: "kube-api-access-hkvr2") pod "b6598bbb-908a-4758-91c0-e72d8a9d4da7" (UID: "b6598bbb-908a-4758-91c0-e72d8a9d4da7"). InnerVolumeSpecName "kube-api-access-hkvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.910854 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk" (OuterVolumeSpecName: "kube-api-access-6j2rk") pod "fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" (UID: "fc12f2df-b90e-4bb9-a255-5e5353ed1dd5"). InnerVolumeSpecName "kube-api-access-6j2rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:07 crc kubenswrapper[4713]: I0126 15:55:07.911002 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb" (OuterVolumeSpecName: "kube-api-access-mmvnb") pod "10c39f53-9957-4f6a-912c-1c2217af11f1" (UID: "10c39f53-9957-4f6a-912c-1c2217af11f1"). 
InnerVolumeSpecName "kube-api-access-mmvnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007883 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkr74\" (UniqueName: \"kubernetes.io/projected/9db8c9f9-5fba-4647-841a-71f4bc24f438-kube-api-access-fkr74\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007915 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmvnb\" (UniqueName: \"kubernetes.io/projected/10c39f53-9957-4f6a-912c-1c2217af11f1-kube-api-access-mmvnb\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007927 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007937 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db8c9f9-5fba-4647-841a-71f4bc24f438-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007946 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkvr2\" (UniqueName: \"kubernetes.io/projected/b6598bbb-908a-4758-91c0-e72d8a9d4da7-kube-api-access-hkvr2\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.007954 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j2rk\" (UniqueName: \"kubernetes.io/projected/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5-kube-api-access-6j2rk\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.056950 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-44wbp" event={"ID":"24468eba-9d4f-446e-ac2d-39c4855686ff","Type":"ContainerDied","Data":"bbc1a1d8f108f75aa60539d5b4ba138d0587b5f106baffd9e75e2ceb78801980"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.056989 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc1a1d8f108f75aa60539d5b4ba138d0587b5f106baffd9e75e2ceb78801980" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.056989 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-44wbp" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.058498 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2f4-account-create-update-mlshf" event={"ID":"10c39f53-9957-4f6a-912c-1c2217af11f1","Type":"ContainerDied","Data":"84b22c6372cca43f64e7aadaa130e9f6590bed2ac3798dcab721b3508619f7ca"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.058528 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84b22c6372cca43f64e7aadaa130e9f6590bed2ac3798dcab721b3508619f7ca" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.058592 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a2f4-account-create-update-mlshf" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.062326 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4567e561-0bd8-4368-8868-e2531d7bb8d3","Type":"ContainerStarted","Data":"ec0f5d675de2c748b0a5dbafee7f819df1d93fb1038e99a244304a2a93d63157"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.063874 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f26-account-create-update-lpqwm" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.063868 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f26-account-create-update-lpqwm" event={"ID":"b6598bbb-908a-4758-91c0-e72d8a9d4da7","Type":"ContainerDied","Data":"89f334c8421fc5abcbc745bc886a95caba249cca7fa837ceb0bcc7d085929c36"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.064004 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f334c8421fc5abcbc745bc886a95caba249cca7fa837ceb0bcc7d085929c36" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.065864 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-glbp5" event={"ID":"fc12f2df-b90e-4bb9-a255-5e5353ed1dd5","Type":"ContainerDied","Data":"dd41ef4387d9879f65848cbee3b3cc3c9a873ff37928538311b2e7b99e1b1b34"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.065894 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd41ef4387d9879f65848cbee3b3cc3c9a873ff37928538311b2e7b99e1b1b34" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.065935 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-glbp5" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.068970 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e73f-account-create-update-kb58x" event={"ID":"9db8c9f9-5fba-4647-841a-71f4bc24f438","Type":"ContainerDied","Data":"74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.069009 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d67903f24bc88e0fa87c5864a6733da1a6f21677c910e97046441307ae5cc0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.069046 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e73f-account-create-update-kb58x" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.080519 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a0b03b5-597a-4c59-9784-218e9f9442d1","Type":"ContainerStarted","Data":"51227d40f4a0862b5dcdfe6a7e6b89cf15a6b953d114ac8482d8a9bc74094e1a"} Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.096875 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.537593780999998 podStartE2EDuration="1m10.096857327s" podCreationTimestamp="2026-01-26 15:53:58 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.675856935 +0000 UTC m=+1230.812874170" lastFinishedPulling="2026-01-26 15:55:07.235120481 +0000 UTC m=+1282.372137716" observedRunningTime="2026-01-26 15:55:08.0886835 +0000 UTC m=+1283.225700735" watchObservedRunningTime="2026-01-26 15:55:08.096857327 +0000 UTC m=+1283.233874562" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.136058 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=34.843957021 podStartE2EDuration="1m6.136036417s" podCreationTimestamp="2026-01-26 15:54:02 +0000 UTC" firstStartedPulling="2026-01-26 15:54:16.048467118 +0000 UTC m=+1231.185484353" lastFinishedPulling="2026-01-26 15:54:47.340546514 +0000 UTC m=+1262.477563749" observedRunningTime="2026-01-26 15:55:08.122701656 +0000 UTC m=+1283.259718901" watchObservedRunningTime="2026-01-26 15:55:08.136036417 +0000 UTC m=+1283.273053672" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353426 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"] Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353833 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e809e90-de29-4ad5-ad0f-8dc49a202b3f" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353860 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e809e90-de29-4ad5-ad0f-8dc49a202b3f" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353875 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6598bbb-908a-4758-91c0-e72d8a9d4da7" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353881 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6598bbb-908a-4758-91c0-e72d8a9d4da7" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353899 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353905 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353912 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db8c9f9-5fba-4647-841a-71f4bc24f438" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353918 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db8c9f9-5fba-4647-841a-71f4bc24f438" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353942 4713 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24468eba-9d4f-446e-ac2d-39c4855686ff" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353948 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="24468eba-9d4f-446e-ac2d-39c4855686ff" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353955 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c39f53-9957-4f6a-912c-1c2217af11f1" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353960 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c39f53-9957-4f6a-912c-1c2217af11f1" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: E0126 15:55:08.353971 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125bdff8-6eff-4f59-9cc4-c986c5771aa0" containerName="swift-ring-rebalance" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.353978 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="125bdff8-6eff-4f59-9cc4-c986c5771aa0" containerName="swift-ring-rebalance" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354171 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6598bbb-908a-4758-91c0-e72d8a9d4da7" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354184 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e809e90-de29-4ad5-ad0f-8dc49a202b3f" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354196 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="125bdff8-6eff-4f59-9cc4-c986c5771aa0" containerName="swift-ring-rebalance" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354222 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c39f53-9957-4f6a-912c-1c2217af11f1" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354238 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db8c9f9-5fba-4647-841a-71f4bc24f438" containerName="mariadb-account-create-update" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354247 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.354258 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="24468eba-9d4f-446e-ac2d-39c4855686ff" containerName="mariadb-database-create" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.355221 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.363096 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.363478 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.391932 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ht5fq"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.393022 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.416504 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62qs\" (UniqueName: \"kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.416584 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.416648 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.416695 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.416943 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.479247 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ht5fq"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519542 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47cs\" (UniqueName: \"kubernetes.io/projected/499db69b-0e82-43b8-99e0-262258615861-kube-api-access-r47cs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519633 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62qs\" (UniqueName: \"kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519683 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-combined-ca-bundle\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519715 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb\") pod 
\"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519743 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519811 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519865 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519902 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovs-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519950 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovn-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.519997 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499db69b-0e82-43b8-99e0-262258615861-config\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.524859 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.525785 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.542000 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: 
\"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.573666 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62qs\" (UniqueName: \"kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs\") pod \"dnsmasq-dns-74f6f696b9-vbxxx\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.623479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-combined-ca-bundle\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.623659 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.624002 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovs-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.624282 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovn-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.624467 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499db69b-0e82-43b8-99e0-262258615861-config\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.624570 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47cs\" (UniqueName: \"kubernetes.io/projected/499db69b-0e82-43b8-99e0-262258615861-kube-api-access-r47cs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.626270 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovn-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.626354 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/499db69b-0e82-43b8-99e0-262258615861-ovs-rundir\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " 
pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.629582 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499db69b-0e82-43b8-99e0-262258615861-config\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.634187 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.635773 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.636450 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.638022 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499db69b-0e82-43b8-99e0-262258615861-combined-ca-bundle\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.654378 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47cs\" (UniqueName: \"kubernetes.io/projected/499db69b-0e82-43b8-99e0-262258615861-kube-api-access-r47cs\") pod \"ovn-controller-metrics-ht5fq\" (UID: \"499db69b-0e82-43b8-99e0-262258615861\") " pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.691262 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.698594 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.702278 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.743702 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.754800 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.760747 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ht5fq" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.762338 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.783229 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-z657q" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.783636 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.783794 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.796016 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.833982 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834037 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-scripts\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834075 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834107 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf25n\" (UniqueName: \"kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834137 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834256 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96pt\" (UniqueName: \"kubernetes.io/projected/e6820209-510a-4346-b86d-006535127cc9-kube-api-access-b96pt\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834350 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834425 
4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834487 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6820209-510a-4346-b86d-006535127cc9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834538 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834566 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.834790 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-config\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.847385 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936125 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf25n\" (UniqueName: \"kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936171 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936247 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96pt\" (UniqueName: \"kubernetes.io/projected/e6820209-510a-4346-b86d-006535127cc9-kube-api-access-b96pt\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936267 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: 
I0126 15:55:08.936296 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936330 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6820209-510a-4346-b86d-006535127cc9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936353 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936386 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936429 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-config\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936505 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936530 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-scripts\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.936554 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.937097 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.937498 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6820209-510a-4346-b86d-006535127cc9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " 
pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.937920 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.938766 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.939251 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.939814 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-config\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.940423 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6820209-510a-4346-b86d-006535127cc9-scripts\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.943997 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.945718 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.952774 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6820209-510a-4346-b86d-006535127cc9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.956761 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf25n\" (UniqueName: \"kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n\") pod \"dnsmasq-dns-698758b865-dmh4f\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") " pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:08 crc kubenswrapper[4713]: I0126 15:55:08.963824 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96pt\" (UniqueName: 
\"kubernetes.io/projected/e6820209-510a-4346-b86d-006535127cc9-kube-api-access-b96pt\") pod \"ovn-northd-0\" (UID: \"e6820209-510a-4346-b86d-006535127cc9\") " pod="openstack/ovn-northd-0" Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.146458 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.171240 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.347188 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"] Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.420187 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ht5fq"] Jan 26 15:55:09 crc kubenswrapper[4713]: W0126 15:55:09.432034 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod499db69b_0e82_43b8_99e0_262258615861.slice/crio-3708c43d7ee47c5ca1402a380a76adf40e343b856dead74d2e63146e78d3db8a WatchSource:0}: Error finding container 3708c43d7ee47c5ca1402a380a76adf40e343b856dead74d2e63146e78d3db8a: Status 404 returned error can't find the container with id 3708c43d7ee47c5ca1402a380a76adf40e343b856dead74d2e63146e78d3db8a Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.638401 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"] Jan 26 15:55:09 crc kubenswrapper[4713]: W0126 15:55:09.650728 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c37845_d3f7_4a91_9dc5_e0f8967b5682.slice/crio-1e2f6e04792eca48e9437147245b09b944dc147cd39faca99b5152105e02569b WatchSource:0}: Error finding container 1e2f6e04792eca48e9437147245b09b944dc147cd39faca99b5152105e02569b: Status 404 returned error can't find the container with id 1e2f6e04792eca48e9437147245b09b944dc147cd39faca99b5152105e02569b Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.738216 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.926684 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6fwxk"] Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.928292 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.930110 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.953332 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6fwxk"] Jan 26 15:55:09 crc kubenswrapper[4713]: I0126 15:55:09.956591 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.070509 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4gg\" (UniqueName: \"kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.070756 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.102437 4713 generic.go:334] "Generic (PLEG): container finished" podID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerID="c44bc47d8a5c9e78e2536a5d7972e14bcfd0de123f6a57ad33e6f33c5a9a7e6f" exitCode=0 Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.102519 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dmh4f" event={"ID":"21c37845-d3f7-4a91-9dc5-e0f8967b5682","Type":"ContainerDied","Data":"c44bc47d8a5c9e78e2536a5d7972e14bcfd0de123f6a57ad33e6f33c5a9a7e6f"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.102552 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dmh4f" event={"ID":"21c37845-d3f7-4a91-9dc5-e0f8967b5682","Type":"ContainerStarted","Data":"1e2f6e04792eca48e9437147245b09b944dc147cd39faca99b5152105e02569b"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.110223 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ht5fq" event={"ID":"499db69b-0e82-43b8-99e0-262258615861","Type":"ContainerStarted","Data":"11862a046d4accd1c2f03df8def9149d3a7610f082d12368fd66af380ab048b1"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.110266 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ht5fq" event={"ID":"499db69b-0e82-43b8-99e0-262258615861","Type":"ContainerStarted","Data":"3708c43d7ee47c5ca1402a380a76adf40e343b856dead74d2e63146e78d3db8a"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.127834 4713 generic.go:334] "Generic (PLEG): container finished" podID="8fcf6581-6532-4f68-9a54-01d32dd012cc" containerID="b1fcf20039120100a6e06db5286b18272b355b981e3d48e52915ac343511bc16" exitCode=0 Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.127948 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" event={"ID":"8fcf6581-6532-4f68-9a54-01d32dd012cc","Type":"ContainerDied","Data":"b1fcf20039120100a6e06db5286b18272b355b981e3d48e52915ac343511bc16"} Jan 26 15:55:10 crc 
kubenswrapper[4713]: I0126 15:55:10.127986 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" event={"ID":"8fcf6581-6532-4f68-9a54-01d32dd012cc","Type":"ContainerStarted","Data":"73e8b9df03c1f2d8f1236b1a0ff20d641c7f970e7c33bd139675c99bec3a09cd"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.130548 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e6820209-510a-4346-b86d-006535127cc9","Type":"ContainerStarted","Data":"5f2ae262743096d5c257c8c9f0957e69a008529eb68825dbc92b5b20688487c6"} Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.174522 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4gg\" (UniqueName: \"kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.181961 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.186034 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.213517 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4gg\" (UniqueName: \"kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg\") pod \"root-account-create-update-6fwxk\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.237975 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ht5fq" podStartSLOduration=2.237926365 podStartE2EDuration="2.237926365s" podCreationTimestamp="2026-01-26 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:10.195743532 +0000 UTC m=+1285.332760777" watchObservedRunningTime="2026-01-26 15:55:10.237926365 +0000 UTC m=+1285.374943610" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.266619 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.319743 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.391275 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-87vcz"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.392761 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.437426 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-87vcz"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.489700 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4llhz\" (UniqueName: \"kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.489959 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.561342 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-6826v"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.562579 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.575124 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-6826v"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.593148 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4llhz\" (UniqueName: \"kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.593221 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.594175 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.631304 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4llhz\" (UniqueName: \"kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz\") pod \"cinder-db-create-87vcz\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.669045 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-dvtkq"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.685554 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-bd27-account-create-update-584sq"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.686912 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.687201 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.692479 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.694451 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.694490 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.696941 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.705132 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.721306 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-bd27-account-create-update-584sq"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.737267 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dvtkq"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.772268 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-90d6-account-create-update-7jgpf"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.779343 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.793702 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798580 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzc9\" (UniqueName: \"kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798685 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798719 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798833 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmgx\" (UniqueName: \"kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798870 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.798894 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.801172 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.802913 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.849575 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-90d6-account-create-update-7jgpf"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.886820 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e0a5-account-create-update-rwzs7"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.887976 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4\") pod \"cloudkitty-db-create-6826v\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.888318 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.891451 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.893001 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e0a5-account-create-update-rwzs7"] Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907561 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmgx\" (UniqueName: \"kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907618 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907645 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907727 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907874 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psr87\" (UniqueName: \"kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.907959 4713 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tkzc9\" (UniqueName: \"kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.910772 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.911252 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.917162 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.934566 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkzc9\" (UniqueName: \"kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9\") pod \"cloudkitty-bd27-account-create-update-584sq\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:10 crc kubenswrapper[4713]: I0126 15:55:10.942085 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmgx\" (UniqueName: \"kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx\") pod \"barbican-db-create-dvtkq\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.009352 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hddbl\" (UniqueName: \"kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.009594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.009673 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psr87\" (UniqueName: \"kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.009724 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.010666 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.041154 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.051962 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.057759 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.072388 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psr87\" (UniqueName: \"kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87\") pod \"barbican-90d6-account-create-update-7jgpf\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.078358 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5qpqg"] Jan 26 15:55:11 crc kubenswrapper[4713]: E0126 15:55:11.078803 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fcf6581-6532-4f68-9a54-01d32dd012cc" containerName="init" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.078822 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcf6581-6532-4f68-9a54-01d32dd012cc" containerName="init" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.079022 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fcf6581-6532-4f68-9a54-01d32dd012cc" containerName="init" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.081075 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.091917 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5qpqg"] Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.114476 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.115076 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hddbl\" (UniqueName: \"kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.115792 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.131671 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.160493 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hddbl\" (UniqueName: \"kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl\") pod \"cinder-e0a5-account-create-update-rwzs7\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.160601 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dmh4f" event={"ID":"21c37845-d3f7-4a91-9dc5-e0f8967b5682","Type":"ContainerStarted","Data":"7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3"} Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.160698 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.163006 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.163055 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" event={"ID":"8fcf6581-6532-4f68-9a54-01d32dd012cc","Type":"ContainerDied","Data":"73e8b9df03c1f2d8f1236b1a0ff20d641c7f970e7c33bd139675c99bec3a09cd"} Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.163088 4713 scope.go:117] "RemoveContainer" containerID="b1fcf20039120100a6e06db5286b18272b355b981e3d48e52915ac343511bc16" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.168512 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.189163 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-dmh4f" podStartSLOduration=3.18914316 podStartE2EDuration="3.18914316s" podCreationTimestamp="2026-01-26 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:11.185654803 +0000 UTC m=+1286.322672038" watchObservedRunningTime="2026-01-26 15:55:11.18914316 +0000 UTC m=+1286.326160385" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.216254 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb\") pod \"8fcf6581-6532-4f68-9a54-01d32dd012cc\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.216323 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc\") pod \"8fcf6581-6532-4f68-9a54-01d32dd012cc\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.216858 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62qs\" (UniqueName: \"kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs\") pod \"8fcf6581-6532-4f68-9a54-01d32dd012cc\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.217416 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config\") pod \"8fcf6581-6532-4f68-9a54-01d32dd012cc\" (UID: \"8fcf6581-6532-4f68-9a54-01d32dd012cc\") " Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.217652 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbs88\" (UniqueName: \"kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.217710 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc 
kubenswrapper[4713]: I0126 15:55:11.233631 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs" (OuterVolumeSpecName: "kube-api-access-x62qs") pod "8fcf6581-6532-4f68-9a54-01d32dd012cc" (UID: "8fcf6581-6532-4f68-9a54-01d32dd012cc"). InnerVolumeSpecName "kube-api-access-x62qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.259231 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fcf6581-6532-4f68-9a54-01d32dd012cc" (UID: "8fcf6581-6532-4f68-9a54-01d32dd012cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.271215 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fcf6581-6532-4f68-9a54-01d32dd012cc" (UID: "8fcf6581-6532-4f68-9a54-01d32dd012cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.273820 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config" (OuterVolumeSpecName: "config") pod "8fcf6581-6532-4f68-9a54-01d32dd012cc" (UID: "8fcf6581-6532-4f68-9a54-01d32dd012cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.310594 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-afae-account-create-update-f5xks"] Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.312349 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.315437 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.319733 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.320027 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbs88\" (UniqueName: \"kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.320138 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x62qs\" (UniqueName: \"kubernetes.io/projected/8fcf6581-6532-4f68-9a54-01d32dd012cc-kube-api-access-x62qs\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.320150 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.320159 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.320168 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcf6581-6532-4f68-9a54-01d32dd012cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.321592 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.350549 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbs88\" (UniqueName: \"kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88\") pod \"neutron-db-create-5qpqg\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.369277 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-afae-account-create-update-f5xks"] Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.396147 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.422936 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.423035 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d645r\" (UniqueName: \"kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.437351 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.526116 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.526177 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d645r\" (UniqueName: \"kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.527097 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.540525 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6fwxk"] Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.563000 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d645r\" (UniqueName: \"kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r\") pod \"neutron-afae-account-create-update-f5xks\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.568801 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-87vcz"] Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.717450 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-6826v"] Jan 26 15:55:11 crc kubenswrapper[4713]: W0126 15:55:11.741743 4713 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864c7381_a1b5_4e9c_986a_9c7368508fd0.slice/crio-488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6 WatchSource:0}: Error finding container 488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6: Status 404 returned error can't find the container with id 488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6 Jan 26 15:55:11 crc kubenswrapper[4713]: I0126 15:55:11.762732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.188638 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-6826v" event={"ID":"864c7381-a1b5-4e9c-986a-9c7368508fd0","Type":"ContainerStarted","Data":"488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6"} Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.201053 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6fwxk" event={"ID":"4a4a6995-a67d-4640-b110-32227664c658","Type":"ContainerStarted","Data":"37ebe1445dbd0c3ae34fb630bc49151229259b7448e93afd1ae7874347eefd44"} Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.227810 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6fwxk" podStartSLOduration=3.227791287 podStartE2EDuration="3.227791287s" podCreationTimestamp="2026-01-26 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:12.222627733 +0000 UTC m=+1287.359644968" watchObservedRunningTime="2026-01-26 15:55:12.227791287 +0000 UTC m=+1287.364808522" Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.238758 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87vcz" event={"ID":"7e78b6f7-7c44-4bb4-b9a9-b763d463466f","Type":"ContainerStarted","Data":"a5f94fb25c7244924cd4631d206aea7ae19a8b8aa48ccdbedd317204ad1957fe"} Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.349416 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dvtkq"] Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.411887 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-bd27-account-create-update-584sq"] Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.838772 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-90d6-account-create-update-7jgpf"] Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.870546 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e0a5-account-create-update-rwzs7"] Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.904400 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5qpqg"] Jan 26 15:55:12 crc kubenswrapper[4713]: I0126 15:55:12.986892 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-afae-account-create-update-f5xks"] Jan 26 15:55:13 crc kubenswrapper[4713]: W0126 15:55:13.016563 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb12849c0_1ce5_4551_acf4_75f9fdc74fed.slice/crio-28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528 WatchSource:0}: Error finding container 
28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528: Status 404 returned error can't find the container with id 28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528 Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.271987 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-bd27-account-create-update-584sq" event={"ID":"d2a98b39-d817-42a6-914f-529499cfc4bc","Type":"ContainerStarted","Data":"18d58f573953e226423c3188aa1d007ab34640ced5883d5de3bd364ffe84b26b"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.272033 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-bd27-account-create-update-584sq" event={"ID":"d2a98b39-d817-42a6-914f-529499cfc4bc","Type":"ContainerStarted","Data":"56b4d0d22c1adf36c673f2cd52e8f7544a9e60203a45272423dc390636906835"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.280131 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5qpqg" event={"ID":"0880b6cd-9c82-432d-8ca2-e536c3f9a68f","Type":"ContainerStarted","Data":"3098dfb84cd7250880a6578e5604878282ed35877f0e2fa9c408b5ecdb505421"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.298792 4713 generic.go:334] "Generic (PLEG): container finished" podID="864c7381-a1b5-4e9c-986a-9c7368508fd0" containerID="9049dbb6073b4a02f2f242abdf7790a1e28f49d718b6da6067f920a91bd1f6dd" exitCode=0 Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.298971 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-6826v" event={"ID":"864c7381-a1b5-4e9c-986a-9c7368508fd0","Type":"ContainerDied","Data":"9049dbb6073b4a02f2f242abdf7790a1e28f49d718b6da6067f920a91bd1f6dd"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.299921 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-bd27-account-create-update-584sq" podStartSLOduration=3.299900274 podStartE2EDuration="3.299900274s" podCreationTimestamp="2026-01-26 15:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:13.298701501 +0000 UTC m=+1288.435718746" watchObservedRunningTime="2026-01-26 15:55:13.299900274 +0000 UTC m=+1288.436917509" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.320945 4713 generic.go:334] "Generic (PLEG): container finished" podID="4a4a6995-a67d-4640-b110-32227664c658" containerID="2d2694a0761ce2e6e1e2fe2588aeaf4de2a502634a8bce309350b7433867044e" exitCode=0 Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.321016 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6fwxk" event={"ID":"4a4a6995-a67d-4640-b110-32227664c658","Type":"ContainerDied","Data":"2d2694a0761ce2e6e1e2fe2588aeaf4de2a502634a8bce309350b7433867044e"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.324290 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-90d6-account-create-update-7jgpf" event={"ID":"92ae0895-a996-466d-800c-14494b72c006","Type":"ContainerStarted","Data":"216fcaaf8eea7907695d61560a81b00283911dd81a4f6110338464f3e36cf2ef"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.324329 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-90d6-account-create-update-7jgpf" event={"ID":"92ae0895-a996-466d-800c-14494b72c006","Type":"ContainerStarted","Data":"6fdb140d1bbf6be11e29f298e6af6479331ac7ef7459ed7b1a7461ea17a50c53"} Jan 26 15:55:13 
crc kubenswrapper[4713]: I0126 15:55:13.334832 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-5qpqg" podStartSLOduration=2.334811945 podStartE2EDuration="2.334811945s" podCreationTimestamp="2026-01-26 15:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:13.323578773 +0000 UTC m=+1288.460596038" watchObservedRunningTime="2026-01-26 15:55:13.334811945 +0000 UTC m=+1288.471829180" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.339106 4713 generic.go:334] "Generic (PLEG): container finished" podID="7dc642ec-46c1-47a0-a022-3259e2d47d42" containerID="8ef672a93d0746d48f47a20338f1e992ca8e2a57efdb609604bee25524c33d47" exitCode=0 Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.339181 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dvtkq" event={"ID":"7dc642ec-46c1-47a0-a022-3259e2d47d42","Type":"ContainerDied","Data":"8ef672a93d0746d48f47a20338f1e992ca8e2a57efdb609604bee25524c33d47"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.339206 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dvtkq" event={"ID":"7dc642ec-46c1-47a0-a022-3259e2d47d42","Type":"ContainerStarted","Data":"f2fd675b8543844b2c2542fc055f583d1f34a7a44ca10850c7031873a0e51b98"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.343558 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e0a5-account-create-update-rwzs7" event={"ID":"5754eedb-9e1a-4f09-a0cd-9e16659b5708","Type":"ContainerStarted","Data":"f54ae1c1c93573cd3ad577a88c16f6df8280c05b1ccde3e9ca2bbd1a9147f148"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.343588 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e0a5-account-create-update-rwzs7" event={"ID":"5754eedb-9e1a-4f09-a0cd-9e16659b5708","Type":"ContainerStarted","Data":"45d6a70d4d8e02fd331093b6ab3ddf3a49ca2d1b3509d9175942608cc68fa55d"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.345737 4713 generic.go:334] "Generic (PLEG): container finished" podID="7e78b6f7-7c44-4bb4-b9a9-b763d463466f" containerID="18e2efca262353feb30b01bab2ea527604ccca1bd75e4c7b9a97898a2555fcfa" exitCode=0 Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.345781 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87vcz" event={"ID":"7e78b6f7-7c44-4bb4-b9a9-b763d463466f","Type":"ContainerDied","Data":"18e2efca262353feb30b01bab2ea527604ccca1bd75e4c7b9a97898a2555fcfa"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.353187 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-afae-account-create-update-f5xks" event={"ID":"b12849c0-1ce5-4551-acf4-75f9fdc74fed","Type":"ContainerStarted","Data":"28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528"} Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.496227 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-90d6-account-create-update-7jgpf" podStartSLOduration=3.496195844 podStartE2EDuration="3.496195844s" podCreationTimestamp="2026-01-26 15:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:13.464078091 +0000 UTC m=+1288.601095326" watchObservedRunningTime="2026-01-26 15:55:13.496195844 +0000 UTC m=+1288.633213069" Jan 26 
15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.505467 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e0a5-account-create-update-rwzs7" podStartSLOduration=3.505446191 podStartE2EDuration="3.505446191s" podCreationTimestamp="2026-01-26 15:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:13.482162744 +0000 UTC m=+1288.619179989" watchObservedRunningTime="2026-01-26 15:55:13.505446191 +0000 UTC m=+1288.642463426" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.654722 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5m6jp"] Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.655996 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.659154 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.659237 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2wsv8" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.660218 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.660573 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.674577 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5m6jp"] Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.687688 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.687837 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrc6\" (UniqueName: \"kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.688038 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.749566 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xfq6j"] Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.751724 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.753777 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7s45m" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.754857 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.779488 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xfq6j"] Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789234 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789292 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789345 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789385 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrc6\" (UniqueName: \"kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789516 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789590 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.789621 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8626j\" (UniqueName: \"kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.797124 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle\") pod 
\"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.805485 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.815601 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrc6\" (UniqueName: \"kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6\") pod \"keystone-db-sync-5m6jp\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.892561 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.892622 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8626j\" (UniqueName: \"kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.892662 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.892690 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.897592 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.898503 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.901266 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.914038 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8626j\" (UniqueName: \"kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j\") pod \"glance-db-sync-xfq6j\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:13 crc kubenswrapper[4713]: I0126 15:55:13.979168 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.077732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xfq6j" Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.433058 4713 generic.go:334] "Generic (PLEG): container finished" podID="b12849c0-1ce5-4551-acf4-75f9fdc74fed" containerID="bf57827960b5474dc1e871957025838887f258c001d25a4642e797275e7d10ed" exitCode=0 Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.433315 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-afae-account-create-update-f5xks" event={"ID":"b12849c0-1ce5-4551-acf4-75f9fdc74fed","Type":"ContainerDied","Data":"bf57827960b5474dc1e871957025838887f258c001d25a4642e797275e7d10ed"} Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.442939 4713 generic.go:334] "Generic (PLEG): container finished" podID="92ae0895-a996-466d-800c-14494b72c006" containerID="216fcaaf8eea7907695d61560a81b00283911dd81a4f6110338464f3e36cf2ef" exitCode=0 Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.443005 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-90d6-account-create-update-7jgpf" event={"ID":"92ae0895-a996-466d-800c-14494b72c006","Type":"ContainerDied","Data":"216fcaaf8eea7907695d61560a81b00283911dd81a4f6110338464f3e36cf2ef"} Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.462169 4713 generic.go:334] "Generic (PLEG): container finished" podID="d2a98b39-d817-42a6-914f-529499cfc4bc" containerID="18d58f573953e226423c3188aa1d007ab34640ced5883d5de3bd364ffe84b26b" exitCode=0 Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.462238 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-bd27-account-create-update-584sq" event={"ID":"d2a98b39-d817-42a6-914f-529499cfc4bc","Type":"ContainerDied","Data":"18d58f573953e226423c3188aa1d007ab34640ced5883d5de3bd364ffe84b26b"} Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.471222 4713 generic.go:334] "Generic (PLEG): container finished" podID="0880b6cd-9c82-432d-8ca2-e536c3f9a68f" containerID="4cd83c03789f811761c6e86f06a0e75a66639c3562f95fa9f839c757fdf302b7" exitCode=0 Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.471332 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5qpqg" event={"ID":"0880b6cd-9c82-432d-8ca2-e536c3f9a68f","Type":"ContainerDied","Data":"4cd83c03789f811761c6e86f06a0e75a66639c3562f95fa9f839c757fdf302b7"} Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.478538 4713 generic.go:334] "Generic (PLEG): container finished" podID="5754eedb-9e1a-4f09-a0cd-9e16659b5708" containerID="f54ae1c1c93573cd3ad577a88c16f6df8280c05b1ccde3e9ca2bbd1a9147f148" exitCode=0 Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.478764 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e0a5-account-create-update-rwzs7" event={"ID":"5754eedb-9e1a-4f09-a0cd-9e16659b5708","Type":"ContainerDied","Data":"f54ae1c1c93573cd3ad577a88c16f6df8280c05b1ccde3e9ca2bbd1a9147f148"} Jan 
26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.479918 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5m6jp"] Jan 26 15:55:14 crc kubenswrapper[4713]: I0126 15:55:14.790573 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.235389 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.264794 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.272114 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.279637 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330653 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts\") pod \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330742 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dmgx\" (UniqueName: \"kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx\") pod \"7dc642ec-46c1-47a0-a022-3259e2d47d42\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330823 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts\") pod \"4a4a6995-a67d-4640-b110-32227664c658\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330856 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts\") pod \"7dc642ec-46c1-47a0-a022-3259e2d47d42\" (UID: \"7dc642ec-46c1-47a0-a022-3259e2d47d42\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330913 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4\") pod \"864c7381-a1b5-4e9c-986a-9c7368508fd0\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330950 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts\") pod \"864c7381-a1b5-4e9c-986a-9c7368508fd0\" (UID: \"864c7381-a1b5-4e9c-986a-9c7368508fd0\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.330988 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pq4gg\" (UniqueName: \"kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg\") pod \"4a4a6995-a67d-4640-b110-32227664c658\" (UID: \"4a4a6995-a67d-4640-b110-32227664c658\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.331022 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4llhz\" (UniqueName: \"kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz\") pod \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\" (UID: \"7e78b6f7-7c44-4bb4-b9a9-b763d463466f\") " Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.339054 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz" (OuterVolumeSpecName: "kube-api-access-4llhz") pod "7e78b6f7-7c44-4bb4-b9a9-b763d463466f" (UID: "7e78b6f7-7c44-4bb4-b9a9-b763d463466f"). InnerVolumeSpecName "kube-api-access-4llhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.340527 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e78b6f7-7c44-4bb4-b9a9-b763d463466f" (UID: "7e78b6f7-7c44-4bb4-b9a9-b763d463466f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.343970 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a4a6995-a67d-4640-b110-32227664c658" (UID: "4a4a6995-a67d-4640-b110-32227664c658"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.344947 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dc642ec-46c1-47a0-a022-3259e2d47d42" (UID: "7dc642ec-46c1-47a0-a022-3259e2d47d42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.345757 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "864c7381-a1b5-4e9c-986a-9c7368508fd0" (UID: "864c7381-a1b5-4e9c-986a-9c7368508fd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.347730 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4" (OuterVolumeSpecName: "kube-api-access-swwp4") pod "864c7381-a1b5-4e9c-986a-9c7368508fd0" (UID: "864c7381-a1b5-4e9c-986a-9c7368508fd0"). InnerVolumeSpecName "kube-api-access-swwp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.350496 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx" (OuterVolumeSpecName: "kube-api-access-8dmgx") pod "7dc642ec-46c1-47a0-a022-3259e2d47d42" (UID: "7dc642ec-46c1-47a0-a022-3259e2d47d42"). InnerVolumeSpecName "kube-api-access-8dmgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.350553 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg" (OuterVolumeSpecName: "kube-api-access-pq4gg") pod "4a4a6995-a67d-4640-b110-32227664c658" (UID: "4a4a6995-a67d-4640-b110-32227664c658"). InnerVolumeSpecName "kube-api-access-pq4gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432594 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dmgx\" (UniqueName: \"kubernetes.io/projected/7dc642ec-46c1-47a0-a022-3259e2d47d42-kube-api-access-8dmgx\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432631 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4a6995-a67d-4640-b110-32227664c658-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432641 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dc642ec-46c1-47a0-a022-3259e2d47d42-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432652 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/864c7381-a1b5-4e9c-986a-9c7368508fd0-kube-api-access-swwp4\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432661 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864c7381-a1b5-4e9c-986a-9c7368508fd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432670 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq4gg\" (UniqueName: \"kubernetes.io/projected/4a4a6995-a67d-4640-b110-32227664c658-kube-api-access-pq4gg\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432679 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4llhz\" (UniqueName: \"kubernetes.io/projected/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-kube-api-access-4llhz\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.432689 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e78b6f7-7c44-4bb4-b9a9-b763d463466f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.442912 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.443199 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="prometheus" containerID="cri-o://216ea4814cf70f73837082e6ba6706de7ae7d6c3f28865b7f62196a1d7825419" gracePeriod=600 Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.443312 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="thanos-sidecar" containerID="cri-o://6ade85f769ecc88afcb608235aca93e4dacb847e3e88a69786faf1b28018c6ec" gracePeriod=600 Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.443372 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="config-reloader" containerID="cri-o://cf102c4cad020291c4283d2335f8456bed242d1e954a454c28b692d2172bece3" gracePeriod=600 Jan 26 15:55:15 crc kubenswrapper[4713]: W0126 15:55:15.456537 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67bee733_1013_44d9_ac74_5ce552dbb606.slice/crio-418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2 WatchSource:0}: Error finding container 418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2: Status 404 returned error can't find the container with id 418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2 Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.468830 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xfq6j"] Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.489960 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e6820209-510a-4346-b86d-006535127cc9","Type":"ContainerStarted","Data":"7664b872dcccddc07b5ddeec1b752320e829879dc6a1317c9bae8f1194797ac8"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.490002 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e6820209-510a-4346-b86d-006535127cc9","Type":"ContainerStarted","Data":"387b6ec6cb628f7326c7b5c174bb846b4616985bd58d70e3b557ce9acd303d2c"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.490934 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.492293 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xfq6j" event={"ID":"67bee733-1013-44d9-ac74-5ce552dbb606","Type":"ContainerStarted","Data":"418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.494158 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6fwxk" event={"ID":"4a4a6995-a67d-4640-b110-32227664c658","Type":"ContainerDied","Data":"37ebe1445dbd0c3ae34fb630bc49151229259b7448e93afd1ae7874347eefd44"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.494192 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37ebe1445dbd0c3ae34fb630bc49151229259b7448e93afd1ae7874347eefd44" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.494253 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6fwxk" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.496686 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87vcz" event={"ID":"7e78b6f7-7c44-4bb4-b9a9-b763d463466f","Type":"ContainerDied","Data":"a5f94fb25c7244924cd4631d206aea7ae19a8b8aa48ccdbedd317204ad1957fe"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.496842 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f94fb25c7244924cd4631d206aea7ae19a8b8aa48ccdbedd317204ad1957fe" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.497001 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-87vcz" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.506747 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dvtkq" event={"ID":"7dc642ec-46c1-47a0-a022-3259e2d47d42","Type":"ContainerDied","Data":"f2fd675b8543844b2c2542fc055f583d1f34a7a44ca10850c7031873a0e51b98"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.506793 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2fd675b8543844b2c2542fc055f583d1f34a7a44ca10850c7031873a0e51b98" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.506846 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dvtkq" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.517099 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.218057344 podStartE2EDuration="7.517083338s" podCreationTimestamp="2026-01-26 15:55:08 +0000 UTC" firstStartedPulling="2026-01-26 15:55:09.752604977 +0000 UTC m=+1284.889622212" lastFinishedPulling="2026-01-26 15:55:14.051630971 +0000 UTC m=+1289.188648206" observedRunningTime="2026-01-26 15:55:15.512941193 +0000 UTC m=+1290.649958428" watchObservedRunningTime="2026-01-26 15:55:15.517083338 +0000 UTC m=+1290.654100573" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.526475 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-6826v" event={"ID":"864c7381-a1b5-4e9c-986a-9c7368508fd0","Type":"ContainerDied","Data":"488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.526527 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488a4d5940ca8f5d37fee0d9278583686d557eedb3320b2ebd897e74119020c6" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.526607 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-6826v" Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.530941 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5m6jp" event={"ID":"d1f04dc7-c644-4c8a-ac31-721292a6874d","Type":"ContainerStarted","Data":"ec0df6b5451d4e7bfb145ca4e5393b4c68ef4035e1cd2afeacd70f8a8edf25c8"} Jan 26 15:55:15 crc kubenswrapper[4713]: I0126 15:55:15.692316 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": dial tcp 10.217.0.113:9090: connect: connection refused" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.121092 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.257665 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts\") pod \"92ae0895-a996-466d-800c-14494b72c006\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.257718 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psr87\" (UniqueName: \"kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87\") pod \"92ae0895-a996-466d-800c-14494b72c006\" (UID: \"92ae0895-a996-466d-800c-14494b72c006\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.260383 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92ae0895-a996-466d-800c-14494b72c006" (UID: "92ae0895-a996-466d-800c-14494b72c006"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.260974 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ae0895-a996-466d-800c-14494b72c006-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.270903 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87" (OuterVolumeSpecName: "kube-api-access-psr87") pod "92ae0895-a996-466d-800c-14494b72c006" (UID: "92ae0895-a996-466d-800c-14494b72c006"). InnerVolumeSpecName "kube-api-access-psr87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.364380 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psr87\" (UniqueName: \"kubernetes.io/projected/92ae0895-a996-466d-800c-14494b72c006-kube-api-access-psr87\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.384409 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6fwxk"] Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.400589 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6fwxk"] Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.553434 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5qpqg" event={"ID":"0880b6cd-9c82-432d-8ca2-e536c3f9a68f","Type":"ContainerDied","Data":"3098dfb84cd7250880a6578e5604878282ed35877f0e2fa9c408b5ecdb505421"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.553485 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3098dfb84cd7250880a6578e5604878282ed35877f0e2fa9c408b5ecdb505421" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.559794 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e0a5-account-create-update-rwzs7" event={"ID":"5754eedb-9e1a-4f09-a0cd-9e16659b5708","Type":"ContainerDied","Data":"45d6a70d4d8e02fd331093b6ab3ddf3a49ca2d1b3509d9175942608cc68fa55d"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.559820 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d6a70d4d8e02fd331093b6ab3ddf3a49ca2d1b3509d9175942608cc68fa55d" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565647 4713 generic.go:334] "Generic (PLEG): container finished" podID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerID="6ade85f769ecc88afcb608235aca93e4dacb847e3e88a69786faf1b28018c6ec" exitCode=0 Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565679 4713 generic.go:334] "Generic (PLEG): container finished" podID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerID="cf102c4cad020291c4283d2335f8456bed242d1e954a454c28b692d2172bece3" exitCode=0 Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565686 4713 generic.go:334] "Generic (PLEG): container finished" podID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerID="216ea4814cf70f73837082e6ba6706de7ae7d6c3f28865b7f62196a1d7825419" exitCode=0 Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565691 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerDied","Data":"6ade85f769ecc88afcb608235aca93e4dacb847e3e88a69786faf1b28018c6ec"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565750 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerDied","Data":"cf102c4cad020291c4283d2335f8456bed242d1e954a454c28b692d2172bece3"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.565776 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerDied","Data":"216ea4814cf70f73837082e6ba6706de7ae7d6c3f28865b7f62196a1d7825419"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.569964 4713 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-afae-account-create-update-f5xks" event={"ID":"b12849c0-1ce5-4551-acf4-75f9fdc74fed","Type":"ContainerDied","Data":"28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.569995 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28eddeb49002ffe87fa081a8a1d6780408b16fbc487d1ac683dffab0a0d50528" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.574338 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-90d6-account-create-update-7jgpf" event={"ID":"92ae0895-a996-466d-800c-14494b72c006","Type":"ContainerDied","Data":"6fdb140d1bbf6be11e29f298e6af6479331ac7ef7459ed7b1a7461ea17a50c53"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.574380 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-90d6-account-create-update-7jgpf" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.574393 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fdb140d1bbf6be11e29f298e6af6479331ac7ef7459ed7b1a7461ea17a50c53" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.580180 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-bd27-account-create-update-584sq" event={"ID":"d2a98b39-d817-42a6-914f-529499cfc4bc","Type":"ContainerDied","Data":"56b4d0d22c1adf36c673f2cd52e8f7544a9e60203a45272423dc390636906835"} Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.580235 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b4d0d22c1adf36c673f2cd52e8f7544a9e60203a45272423dc390636906835" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.623612 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.632160 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.649384 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.676014 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.684909 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771131 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hddbl\" (UniqueName: \"kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl\") pod \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771178 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts\") pod \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\" (UID: \"5754eedb-9e1a-4f09-a0cd-9e16659b5708\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771272 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts\") pod \"d2a98b39-d817-42a6-914f-529499cfc4bc\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771311 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts\") pod \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771383 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d645r\" (UniqueName: \"kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r\") pod \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\" (UID: \"b12849c0-1ce5-4551-acf4-75f9fdc74fed\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.771423 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkzc9\" (UniqueName: \"kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9\") pod \"d2a98b39-d817-42a6-914f-529499cfc4bc\" (UID: \"d2a98b39-d817-42a6-914f-529499cfc4bc\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.772421 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2a98b39-d817-42a6-914f-529499cfc4bc" (UID: "d2a98b39-d817-42a6-914f-529499cfc4bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.772862 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5754eedb-9e1a-4f09-a0cd-9e16659b5708" (UID: "5754eedb-9e1a-4f09-a0cd-9e16659b5708"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.773733 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12849c0-1ce5-4551-acf4-75f9fdc74fed" (UID: "b12849c0-1ce5-4551-acf4-75f9fdc74fed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.783508 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r" (OuterVolumeSpecName: "kube-api-access-d645r") pod "b12849c0-1ce5-4551-acf4-75f9fdc74fed" (UID: "b12849c0-1ce5-4551-acf4-75f9fdc74fed"). InnerVolumeSpecName "kube-api-access-d645r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.797749 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl" (OuterVolumeSpecName: "kube-api-access-hddbl") pod "5754eedb-9e1a-4f09-a0cd-9e16659b5708" (UID: "5754eedb-9e1a-4f09-a0cd-9e16659b5708"). InnerVolumeSpecName "kube-api-access-hddbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.797901 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9" (OuterVolumeSpecName: "kube-api-access-tkzc9") pod "d2a98b39-d817-42a6-914f-529499cfc4bc" (UID: "d2a98b39-d817-42a6-914f-529499cfc4bc"). InnerVolumeSpecName "kube-api-access-tkzc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872566 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872608 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872639 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k7lz\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872691 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872728 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872772 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts\") pod \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\" (UID: 
\"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872786 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbs88\" (UniqueName: \"kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88\") pod \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\" (UID: \"0880b6cd-9c82-432d-8ca2-e536c3f9a68f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872801 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.872973 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.873034 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.873058 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out\") pod \"78543593-d6da-448f-adf7-e1ead58bfb5f\" (UID: \"78543593-d6da-448f-adf7-e1ead58bfb5f\") " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.875925 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hddbl\" (UniqueName: \"kubernetes.io/projected/5754eedb-9e1a-4f09-a0cd-9e16659b5708-kube-api-access-hddbl\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.876819 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5754eedb-9e1a-4f09-a0cd-9e16659b5708-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.876893 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a98b39-d817-42a6-914f-529499cfc4bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.877221 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12849c0-1ce5-4551-acf4-75f9fdc74fed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.877276 4713 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-d645r\" (UniqueName: \"kubernetes.io/projected/b12849c0-1ce5-4551-acf4-75f9fdc74fed-kube-api-access-d645r\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.877295 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkzc9\" (UniqueName: \"kubernetes.io/projected/d2a98b39-d817-42a6-914f-529499cfc4bc-kube-api-access-tkzc9\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.877546 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out" (OuterVolumeSpecName: "config-out") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.877770 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0880b6cd-9c82-432d-8ca2-e536c3f9a68f" (UID: "0880b6cd-9c82-432d-8ca2-e536c3f9a68f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.878727 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.879039 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.881115 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config" (OuterVolumeSpecName: "config") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.883490 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz" (OuterVolumeSpecName: "kube-api-access-2k7lz") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "kube-api-access-2k7lz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.885067 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.885152 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.885693 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88" (OuterVolumeSpecName: "kube-api-access-gbs88") pod "0880b6cd-9c82-432d-8ca2-e536c3f9a68f" (UID: "0880b6cd-9c82-432d-8ca2-e536c3f9a68f"). InnerVolumeSpecName "kube-api-access-gbs88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.896529 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.898981 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.920611 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config" (OuterVolumeSpecName: "web-config") pod "78543593-d6da-448f-adf7-e1ead58bfb5f" (UID: "78543593-d6da-448f-adf7-e1ead58bfb5f"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979042 4713 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979182 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") on node \"crc\" " Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979196 4713 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979209 4713 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78543593-d6da-448f-adf7-e1ead58bfb5f-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979221 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979230 4713 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979239 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k7lz\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-kube-api-access-2k7lz\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979248 4713 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78543593-d6da-448f-adf7-e1ead58bfb5f-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979257 4713 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78543593-d6da-448f-adf7-e1ead58bfb5f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979266 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979282 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbs88\" (UniqueName: \"kubernetes.io/projected/0880b6cd-9c82-432d-8ca2-e536c3f9a68f-kube-api-access-gbs88\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:16 crc kubenswrapper[4713]: I0126 15:55:16.979293 4713 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/78543593-d6da-448f-adf7-e1ead58bfb5f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.003437 4713 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.003653 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa") on node "crc" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.083911 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.609957 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78543593-d6da-448f-adf7-e1ead58bfb5f","Type":"ContainerDied","Data":"f52cf428a199a308bf8defe41efe92502fe7063994479305f414d0205ae5c0d9"} Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610480 4713 scope.go:117] "RemoveContainer" containerID="6ade85f769ecc88afcb608235aca93e4dacb847e3e88a69786faf1b28018c6ec" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610007 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-afae-account-create-update-f5xks" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610049 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5qpqg" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610062 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-bd27-account-create-update-584sq" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610080 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e0a5-account-create-update-rwzs7" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.610022 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.655549 4713 scope.go:117] "RemoveContainer" containerID="cf102c4cad020291c4283d2335f8456bed242d1e954a454c28b692d2172bece3" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.699872 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.701002 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.707971 4713 scope.go:117] "RemoveContainer" containerID="216ea4814cf70f73837082e6ba6706de7ae7d6c3f28865b7f62196a1d7825419" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.720434 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0432b2d-538e-4b04-899b-6fe666f340de-etc-swift\") pod \"swift-storage-0\" (UID: \"d0432b2d-538e-4b04-899b-6fe666f340de\") " pod="openstack/swift-storage-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.736841 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.751879 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.815938 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4a6995-a67d-4640-b110-32227664c658" path="/var/lib/kubelet/pods/4a4a6995-a67d-4640-b110-32227664c658/volumes" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.817200 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" path="/var/lib/kubelet/pods/78543593-d6da-448f-adf7-e1ead58bfb5f/volumes" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.818018 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.818402 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0880b6cd-9c82-432d-8ca2-e536c3f9a68f" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.818475 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0880b6cd-9c82-432d-8ca2-e536c3f9a68f" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.818568 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="init-config-reloader" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.818623 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="init-config-reloader" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.818715 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5754eedb-9e1a-4f09-a0cd-9e16659b5708" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.818780 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5754eedb-9e1a-4f09-a0cd-9e16659b5708" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.818836 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a98b39-d817-42a6-914f-529499cfc4bc" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.818892 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a98b39-d817-42a6-914f-529499cfc4bc" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.818969 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12849c0-1ce5-4551-acf4-75f9fdc74fed" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.819027 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12849c0-1ce5-4551-acf4-75f9fdc74fed" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.819115 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="prometheus" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.819178 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="prometheus" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.819238 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e78b6f7-7c44-4bb4-b9a9-b763d463466f" 
containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.819290 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e78b6f7-7c44-4bb4-b9a9-b763d463466f" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.819354 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864c7381-a1b5-4e9c-986a-9c7368508fd0" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.822010 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="864c7381-a1b5-4e9c-986a-9c7368508fd0" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.822139 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ae0895-a996-466d-800c-14494b72c006" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.822219 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ae0895-a996-466d-800c-14494b72c006" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.822304 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="thanos-sidecar" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.822403 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="thanos-sidecar" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.822487 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4a6995-a67d-4640-b110-32227664c658" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.822625 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4a6995-a67d-4640-b110-32227664c658" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.822818 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc642ec-46c1-47a0-a022-3259e2d47d42" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.822885 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc642ec-46c1-47a0-a022-3259e2d47d42" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: E0126 15:55:17.822951 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="config-reloader" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.823005 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="config-reloader" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.823316 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5754eedb-9e1a-4f09-a0cd-9e16659b5708" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.823405 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ae0895-a996-466d-800c-14494b72c006" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.823496 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e78b6f7-7c44-4bb4-b9a9-b763d463466f" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.823889 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" 
containerName="config-reloader" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824005 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0880b6cd-9c82-432d-8ca2-e536c3f9a68f" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824104 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc642ec-46c1-47a0-a022-3259e2d47d42" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824274 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="thanos-sidecar" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824383 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="78543593-d6da-448f-adf7-e1ead58bfb5f" containerName="prometheus" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824467 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="864c7381-a1b5-4e9c-986a-9c7368508fd0" containerName="mariadb-database-create" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824659 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4a6995-a67d-4640-b110-32227664c658" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824800 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12849c0-1ce5-4551-acf4-75f9fdc74fed" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824891 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a98b39-d817-42a6-914f-529499cfc4bc" containerName="mariadb-account-create-update" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.824415 4713 scope.go:117] "RemoveContainer" containerID="dc5a619459a84dbf47717cb24a2a9866189e214b6d1072f1143f1f9d5871eb73" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.828072 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.840991 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.841683 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.841819 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.843934 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.844332 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.845183 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.845446 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.846876 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hsnpc" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.850845 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.862293 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:17 crc kubenswrapper[4713]: I0126 15:55:17.954338 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c9tvd" podUID="518d38d7-b30e-4d67-a3d7-456e26fc9869" containerName="ovn-controller" probeResult="failure" output=< Jan 26 15:55:17 crc kubenswrapper[4713]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 15:55:17 crc kubenswrapper[4713]: > Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.009906 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010322 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa01a31-895a-4fcd-845b-264c0cec88de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010495 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010567 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010648 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010702 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010748 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.010896 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.011481 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.011535 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.011572 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7t26\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-kube-api-access-z7t26\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc 
kubenswrapper[4713]: I0126 15:55:18.011612 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.011661 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114031 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114135 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114196 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa01a31-895a-4fcd-845b-264c0cec88de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114230 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114264 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114557 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114593 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114635 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114721 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114774 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114817 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114846 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7t26\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-kube-api-access-z7t26\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.114881 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.115310 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.115810 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.117408 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa01a31-895a-4fcd-845b-264c0cec88de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.122979 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.123237 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.123261 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.123271 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.123343 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.123565 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.125029 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa01a31-895a-4fcd-845b-264c0cec88de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.125038 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/3aa01a31-895a-4fcd-845b-264c0cec88de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.132055 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.132350 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3a47662dc62deaa080ed91fdc8d2453be14d746889aa742379c7becfb263ca9/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.141242 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7t26\" (UniqueName: \"kubernetes.io/projected/3aa01a31-895a-4fcd-845b-264c0cec88de-kube-api-access-z7t26\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.190083 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba78bb07-3b4c-41bf-b2f5-967af56507fa\") pod \"prometheus-metric-storage-0\" (UID: \"3aa01a31-895a-4fcd-845b-264c0cec88de\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.245873 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:18 crc kubenswrapper[4713]: I0126 15:55:18.504133 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:18.773604 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.148045 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-dmh4f" Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.222540 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"] Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.222897 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="dnsmasq-dns" containerID="cri-o://38d64878c171696ca10230af8df3d35da9cf890b721c59982be9ad670b260434" gracePeriod=10 Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.644239 4713 generic.go:334] "Generic (PLEG): container finished" podID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerID="38d64878c171696ca10230af8df3d35da9cf890b721c59982be9ad670b260434" exitCode=0 Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.644338 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" event={"ID":"7cec4b59-6054-4f9b-998f-35059f5a12d6","Type":"ContainerDied","Data":"38d64878c171696ca10230af8df3d35da9cf890b721c59982be9ad670b260434"} Jan 26 15:55:19 crc kubenswrapper[4713]: I0126 15:55:19.714830 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.015427 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9p9dm"] Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.016772 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.019812 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.061462 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9p9dm"] Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.160980 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.161204 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdfr\" (UniqueName: \"kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.262824 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.262946 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdfr\" (UniqueName: \"kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.266297 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.287236 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdfr\" (UniqueName: \"kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr\") pod \"root-account-create-update-9p9dm\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:20 crc kubenswrapper[4713]: I0126 15:55:20.369125 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:21 crc kubenswrapper[4713]: W0126 15:55:21.990907 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aa01a31_895a_4fcd_845b_264c0cec88de.slice/crio-c21663907943fbce716d381e6335a7bbe8861712113d9f9807f174b6792fd8c7 WatchSource:0}: Error finding container c21663907943fbce716d381e6335a7bbe8861712113d9f9807f174b6792fd8c7: Status 404 returned error can't find the container with id c21663907943fbce716d381e6335a7bbe8861712113d9f9807f174b6792fd8c7 Jan 26 15:55:21 crc kubenswrapper[4713]: W0126 15:55:21.992609 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0432b2d_538e_4b04_899b_6fe666f340de.slice/crio-1dd8f8c60f3800be4319ee03e8085d736c6da3658894964bb7a2a32f4459ca54 WatchSource:0}: Error finding container 1dd8f8c60f3800be4319ee03e8085d736c6da3658894964bb7a2a32f4459ca54: Status 404 returned error can't find the container with id 1dd8f8c60f3800be4319ee03e8085d736c6da3658894964bb7a2a32f4459ca54 Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.394782 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.510962 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config\") pod \"7cec4b59-6054-4f9b-998f-35059f5a12d6\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.511093 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc\") pod \"7cec4b59-6054-4f9b-998f-35059f5a12d6\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.511202 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4lkb\" (UniqueName: \"kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb\") pod \"7cec4b59-6054-4f9b-998f-35059f5a12d6\" (UID: \"7cec4b59-6054-4f9b-998f-35059f5a12d6\") " Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.527110 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb" (OuterVolumeSpecName: "kube-api-access-n4lkb") pod "7cec4b59-6054-4f9b-998f-35059f5a12d6" (UID: "7cec4b59-6054-4f9b-998f-35059f5a12d6"). InnerVolumeSpecName "kube-api-access-n4lkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.576756 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config" (OuterVolumeSpecName: "config") pod "7cec4b59-6054-4f9b-998f-35059f5a12d6" (UID: "7cec4b59-6054-4f9b-998f-35059f5a12d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.593342 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7cec4b59-6054-4f9b-998f-35059f5a12d6" (UID: "7cec4b59-6054-4f9b-998f-35059f5a12d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.613547 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.613578 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cec4b59-6054-4f9b-998f-35059f5a12d6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.613588 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4lkb\" (UniqueName: \"kubernetes.io/projected/7cec4b59-6054-4f9b-998f-35059f5a12d6-kube-api-access-n4lkb\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.643627 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9p9dm"] Jan 26 15:55:22 crc kubenswrapper[4713]: W0126 15:55:22.647748 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad281772_79dc_459e_8bae_83588fde23c6.slice/crio-5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752 WatchSource:0}: Error finding container 5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752: Status 404 returned error can't find the container with id 5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752 Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.675688 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" event={"ID":"7cec4b59-6054-4f9b-998f-35059f5a12d6","Type":"ContainerDied","Data":"325d909c4bf0dd4d0950e861b575a022ac6f1f0e27ea9ff0a8b3ac87c62c4f9b"} Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.675774 4713 scope.go:117] "RemoveContainer" containerID="38d64878c171696ca10230af8df3d35da9cf890b721c59982be9ad670b260434" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.676002 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q9fhm" Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.682119 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerStarted","Data":"c21663907943fbce716d381e6335a7bbe8861712113d9f9807f174b6792fd8c7"} Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.684486 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"1dd8f8c60f3800be4319ee03e8085d736c6da3658894964bb7a2a32f4459ca54"} Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.686211 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9p9dm" event={"ID":"ad281772-79dc-459e-8bae-83588fde23c6","Type":"ContainerStarted","Data":"5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752"} Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.727763 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"] Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.735035 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q9fhm"] Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.937745 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c9tvd" podUID="518d38d7-b30e-4d67-a3d7-456e26fc9869" containerName="ovn-controller" probeResult="failure" output=< Jan 26 15:55:22 crc kubenswrapper[4713]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 15:55:22 crc kubenswrapper[4713]: > Jan 26 15:55:22 crc kubenswrapper[4713]: I0126 15:55:22.999892 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.006572 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rl7z9" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.255947 4713 scope.go:117] "RemoveContainer" containerID="6ed5a70e1bffd7c036cec773bfa3b7318ac55a83e8e997843decb29ead7e36e6" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.405408 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9tvd-config-sxx89"] Jan 26 15:55:23 crc kubenswrapper[4713]: E0126 15:55:23.405935 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="dnsmasq-dns" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.405962 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="dnsmasq-dns" Jan 26 15:55:23 crc kubenswrapper[4713]: E0126 15:55:23.405991 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="init" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.406001 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="init" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.406209 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" containerName="dnsmasq-dns" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.407953 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.409970 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.440438 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9tvd-config-sxx89"] Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.532942 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.533035 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7mjz\" (UniqueName: \"kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.533158 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.533394 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.533457 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.533632 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635251 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635323 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635392 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635451 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635482 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7mjz\" (UniqueName: \"kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635504 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635681 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.635697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.636415 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.638433 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.654833 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7mjz\" (UniqueName: \"kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz\") pod \"ovn-controller-c9tvd-config-sxx89\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.814199 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:23 crc kubenswrapper[4713]: I0126 15:55:23.814655 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cec4b59-6054-4f9b-998f-35059f5a12d6" path="/var/lib/kubelet/pods/7cec4b59-6054-4f9b-998f-35059f5a12d6/volumes" Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.319510 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9tvd-config-sxx89"] Jan 26 15:55:24 crc kubenswrapper[4713]: W0126 15:55:24.325716 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97e2303_b568_4580_83bc_3ee8e4386fb8.slice/crio-d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2 WatchSource:0}: Error finding container d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2: Status 404 returned error can't find the container with id d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2 Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.718021 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5m6jp" event={"ID":"d1f04dc7-c644-4c8a-ac31-721292a6874d","Type":"ContainerStarted","Data":"b3d95c2a686b2cf19fb37d4d134cd8c2b4059bacd4b7bd57d7c26b2b20f8d38c"} Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.721073 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9tvd-config-sxx89" event={"ID":"f97e2303-b568-4580-83bc-3ee8e4386fb8","Type":"ContainerStarted","Data":"2c70e016d1bc30e3060cc7d05817c6a625a971d666abcb4e8ac2dd9715da0525"} Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.721125 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9tvd-config-sxx89" event={"ID":"f97e2303-b568-4580-83bc-3ee8e4386fb8","Type":"ContainerStarted","Data":"d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2"} Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.723700 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9p9dm" event={"ID":"ad281772-79dc-459e-8bae-83588fde23c6","Type":"ContainerStarted","Data":"a910425c6e3b6c49990aceb6d5ab231979abd0c63f0f92331b353feafd50e5d9"} Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.743633 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5m6jp" podStartSLOduration=4.187953837 podStartE2EDuration="11.743608665s" podCreationTimestamp="2026-01-26 15:55:13 +0000 UTC" firstStartedPulling="2026-01-26 15:55:14.526508298 +0000 UTC m=+1289.663525533" lastFinishedPulling="2026-01-26 15:55:22.082163126 +0000 UTC m=+1297.219180361" observedRunningTime="2026-01-26 15:55:24.735177991 +0000 UTC 
m=+1299.872195236" watchObservedRunningTime="2026-01-26 15:55:24.743608665 +0000 UTC m=+1299.880625900" Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.779132 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c9tvd-config-sxx89" podStartSLOduration=1.779108763 podStartE2EDuration="1.779108763s" podCreationTimestamp="2026-01-26 15:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:24.754238211 +0000 UTC m=+1299.891255466" watchObservedRunningTime="2026-01-26 15:55:24.779108763 +0000 UTC m=+1299.916125998" Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.784632 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 15:55:24 crc kubenswrapper[4713]: I0126 15:55:24.797439 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9p9dm" podStartSLOduration=5.79734266 podStartE2EDuration="5.79734266s" podCreationTimestamp="2026-01-26 15:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:24.77361483 +0000 UTC m=+1299.910632065" watchObservedRunningTime="2026-01-26 15:55:24.79734266 +0000 UTC m=+1299.934359885" Jan 26 15:55:25 crc kubenswrapper[4713]: I0126 15:55:25.734536 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerStarted","Data":"b780d193cd25ed80a7ce6cd34cfd698dca70d1e948376d602e13665323f69010"} Jan 26 15:55:25 crc kubenswrapper[4713]: I0126 15:55:25.746988 4713 generic.go:334] "Generic (PLEG): container finished" podID="f97e2303-b568-4580-83bc-3ee8e4386fb8" containerID="2c70e016d1bc30e3060cc7d05817c6a625a971d666abcb4e8ac2dd9715da0525" exitCode=0 Jan 26 15:55:25 crc kubenswrapper[4713]: I0126 15:55:25.747074 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9tvd-config-sxx89" event={"ID":"f97e2303-b568-4580-83bc-3ee8e4386fb8","Type":"ContainerDied","Data":"2c70e016d1bc30e3060cc7d05817c6a625a971d666abcb4e8ac2dd9715da0525"} Jan 26 15:55:25 crc kubenswrapper[4713]: I0126 15:55:25.751083 4713 generic.go:334] "Generic (PLEG): container finished" podID="ad281772-79dc-459e-8bae-83588fde23c6" containerID="a910425c6e3b6c49990aceb6d5ab231979abd0c63f0f92331b353feafd50e5d9" exitCode=0 Jan 26 15:55:25 crc kubenswrapper[4713]: I0126 15:55:25.751137 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9p9dm" event={"ID":"ad281772-79dc-459e-8bae-83588fde23c6","Type":"ContainerDied","Data":"a910425c6e3b6c49990aceb6d5ab231979abd0c63f0f92331b353feafd50e5d9"} Jan 26 15:55:27 crc kubenswrapper[4713]: I0126 15:55:27.929089 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c9tvd" Jan 26 15:55:29 crc kubenswrapper[4713]: I0126 15:55:29.241026 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 15:55:31 crc kubenswrapper[4713]: I0126 15:55:31.809551 4713 generic.go:334] "Generic (PLEG): container finished" podID="3aa01a31-895a-4fcd-845b-264c0cec88de" 
containerID="b780d193cd25ed80a7ce6cd34cfd698dca70d1e948376d602e13665323f69010" exitCode=0 Jan 26 15:55:31 crc kubenswrapper[4713]: I0126 15:55:31.811914 4713 generic.go:334] "Generic (PLEG): container finished" podID="d1f04dc7-c644-4c8a-ac31-721292a6874d" containerID="b3d95c2a686b2cf19fb37d4d134cd8c2b4059bacd4b7bd57d7c26b2b20f8d38c" exitCode=0 Jan 26 15:55:31 crc kubenswrapper[4713]: I0126 15:55:31.816945 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerDied","Data":"b780d193cd25ed80a7ce6cd34cfd698dca70d1e948376d602e13665323f69010"} Jan 26 15:55:31 crc kubenswrapper[4713]: I0126 15:55:31.816990 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5m6jp" event={"ID":"d1f04dc7-c644-4c8a-ac31-721292a6874d","Type":"ContainerDied","Data":"b3d95c2a686b2cf19fb37d4d134cd8c2b4059bacd4b7bd57d7c26b2b20f8d38c"} Jan 26 15:55:33 crc kubenswrapper[4713]: E0126 15:55:33.198798 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 26 15:55:33 crc kubenswrapper[4713]: E0126 15:55:33.199690 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8626j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xfq6j_openstack(67bee733-1013-44d9-ac74-5ce552dbb606): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" 
Jan 26 15:55:33 crc kubenswrapper[4713]: E0126 15:55:33.200823 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xfq6j" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.253893 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.259744 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441112 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441229 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441266 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441250 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run" (OuterVolumeSpecName: "var-run") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441306 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7mjz\" (UniqueName: \"kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441319 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441339 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441354 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts\") pod \"ad281772-79dc-459e-8bae-83588fde23c6\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441459 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441524 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdfr\" (UniqueName: \"kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr\") pod \"ad281772-79dc-459e-8bae-83588fde23c6\" (UID: \"ad281772-79dc-459e-8bae-83588fde23c6\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.441629 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts\") pod \"f97e2303-b568-4580-83bc-3ee8e4386fb8\" (UID: \"f97e2303-b568-4580-83bc-3ee8e4386fb8\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.442147 4713 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.442172 4713 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.442183 4713 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97e2303-b568-4580-83bc-3ee8e4386fb8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.442976 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.442979 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad281772-79dc-459e-8bae-83588fde23c6" (UID: "ad281772-79dc-459e-8bae-83588fde23c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.443240 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts" (OuterVolumeSpecName: "scripts") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.446956 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr" (OuterVolumeSpecName: "kube-api-access-wzdfr") pod "ad281772-79dc-459e-8bae-83588fde23c6" (UID: "ad281772-79dc-459e-8bae-83588fde23c6"). InnerVolumeSpecName "kube-api-access-wzdfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.449341 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz" (OuterVolumeSpecName: "kube-api-access-r7mjz") pod "f97e2303-b568-4580-83bc-3ee8e4386fb8" (UID: "f97e2303-b568-4580-83bc-3ee8e4386fb8"). InnerVolumeSpecName "kube-api-access-r7mjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.544837 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7mjz\" (UniqueName: \"kubernetes.io/projected/f97e2303-b568-4580-83bc-3ee8e4386fb8-kube-api-access-r7mjz\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.544910 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad281772-79dc-459e-8bae-83588fde23c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.544927 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.544943 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdfr\" (UniqueName: \"kubernetes.io/projected/ad281772-79dc-459e-8bae-83588fde23c6-kube-api-access-wzdfr\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.544957 4713 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f97e2303-b568-4580-83bc-3ee8e4386fb8-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.629908 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.750763 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data\") pod \"d1f04dc7-c644-4c8a-ac31-721292a6874d\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.751251 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrc6\" (UniqueName: \"kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6\") pod \"d1f04dc7-c644-4c8a-ac31-721292a6874d\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.751438 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle\") pod \"d1f04dc7-c644-4c8a-ac31-721292a6874d\" (UID: \"d1f04dc7-c644-4c8a-ac31-721292a6874d\") " Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.756889 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6" (OuterVolumeSpecName: "kube-api-access-hkrc6") pod "d1f04dc7-c644-4c8a-ac31-721292a6874d" (UID: "d1f04dc7-c644-4c8a-ac31-721292a6874d"). InnerVolumeSpecName "kube-api-access-hkrc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.784592 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1f04dc7-c644-4c8a-ac31-721292a6874d" (UID: "d1f04dc7-c644-4c8a-ac31-721292a6874d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.831801 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data" (OuterVolumeSpecName: "config-data") pod "d1f04dc7-c644-4c8a-ac31-721292a6874d" (UID: "d1f04dc7-c644-4c8a-ac31-721292a6874d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.836777 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"8e031a379aedfee1ed0ec4512ca1ebc3cb579bbd6dd22d5518d245af48d15dea"} Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.838592 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5m6jp" event={"ID":"d1f04dc7-c644-4c8a-ac31-721292a6874d","Type":"ContainerDied","Data":"ec0df6b5451d4e7bfb145ca4e5393b4c68ef4035e1cd2afeacd70f8a8edf25c8"} Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.838616 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5m6jp" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.838646 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0df6b5451d4e7bfb145ca4e5393b4c68ef4035e1cd2afeacd70f8a8edf25c8" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.844070 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9tvd-config-sxx89" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.844080 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9tvd-config-sxx89" event={"ID":"f97e2303-b568-4580-83bc-3ee8e4386fb8","Type":"ContainerDied","Data":"d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2"} Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.844112 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ead3c13a2dc60bc5a521b5389fbcfdb350eda01bc27a257af3dc60797440f2" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.845929 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9p9dm" event={"ID":"ad281772-79dc-459e-8bae-83588fde23c6","Type":"ContainerDied","Data":"5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752"} Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.845957 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bde3b099b935cf547b763511a38f4739fa428328eeab64d96942d2a2264e752" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.845935 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9p9dm" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.852292 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerStarted","Data":"e2a2ec71a6747644c86a0d3ffb0f8ff2e9da05ade8a0c4c4bbff955db9980c8e"} Jan 26 15:55:33 crc kubenswrapper[4713]: E0126 15:55:33.853183 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xfq6j" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.853905 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.853930 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f04dc7-c644-4c8a-ac31-721292a6874d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:33 crc kubenswrapper[4713]: I0126 15:55:33.853940 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkrc6\" (UniqueName: \"kubernetes.io/projected/d1f04dc7-c644-4c8a-ac31-721292a6874d-kube-api-access-hkrc6\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.113511 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r2ngk"] Jan 26 15:55:34 crc kubenswrapper[4713]: E0126 15:55:34.113907 4713 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1f04dc7-c644-4c8a-ac31-721292a6874d" containerName="keystone-db-sync" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.113922 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f04dc7-c644-4c8a-ac31-721292a6874d" containerName="keystone-db-sync" Jan 26 15:55:34 crc kubenswrapper[4713]: E0126 15:55:34.113934 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97e2303-b568-4580-83bc-3ee8e4386fb8" containerName="ovn-config" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.113944 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97e2303-b568-4580-83bc-3ee8e4386fb8" containerName="ovn-config" Jan 26 15:55:34 crc kubenswrapper[4713]: E0126 15:55:34.113953 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad281772-79dc-459e-8bae-83588fde23c6" containerName="mariadb-account-create-update" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.113959 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad281772-79dc-459e-8bae-83588fde23c6" containerName="mariadb-account-create-update" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.114113 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f04dc7-c644-4c8a-ac31-721292a6874d" containerName="keystone-db-sync" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.114122 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad281772-79dc-459e-8bae-83588fde23c6" containerName="mariadb-account-create-update" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.114132 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97e2303-b568-4580-83bc-3ee8e4386fb8" containerName="ovn-config" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.114770 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.123651 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.123918 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.123726 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2wsv8" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.123782 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.124638 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.164707 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.166461 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.233947 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263346 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263684 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263708 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrz6n\" (UniqueName: \"kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263735 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263752 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263771 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263807 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263824 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263865 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263910 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.263931 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b44k\" (UniqueName: \"kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.264053 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r2ngk"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.368648 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.368801 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.368843 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b44k\" (UniqueName: \"kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.368929 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.368987 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369022 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrz6n\" (UniqueName: \"kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n\") pod \"keystone-bootstrap-r2ngk\" (UID: 
\"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369059 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369082 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369126 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369203 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.369241 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.370154 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.371064 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.371197 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.378309 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.383015 4713 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.383911 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.386355 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.391081 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.392024 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrz6n\" (UniqueName: \"kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.420802 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data\") pod \"keystone-bootstrap-r2ngk\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.430401 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b44k\" (UniqueName: \"kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k\") pod \"dnsmasq-dns-f877ddd87-h2zhf\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") " pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.439874 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.493059 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-zhp42"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.494342 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.514655 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.514855 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kbfj7" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.515023 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.515187 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.525454 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c9tvd-config-sxx89"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.563476 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.589622 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c9tvd-config-sxx89"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.632479 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zhp42"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.678419 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nkt8b"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.707116 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.717332 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4m7t\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.717395 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.717436 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.717542 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.717584 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.757091 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.757389 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.767694 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.770039 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.776313 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nkt8b"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.795528 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.795780 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.797874 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="a45d2a2d-be1b-476e-8fbf-f9bdd5a97301" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.817165 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823474 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823538 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4m7t\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823565 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823587 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823612 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts\") pod 
\"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823636 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwwp\" (UniqueName: \"kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823661 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823693 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823719 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823746 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823827 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2hn\" (UniqueName: \"kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823870 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823913 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.823953 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " 
pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.824020 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.824171 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.824211 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.824242 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.826847 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7nxks" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.859559 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.862904 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.864035 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.872914 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.899442 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9mb6k"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.901007 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.904879 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.905229 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d4586" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.905482 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.916184 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4m7t\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t\") pod \"cloudkitty-db-sync-zhp42\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.916647 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.942919 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.942972 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943006 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943026 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943058 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943086 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943103 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts\") pod \"cinder-db-sync-nkt8b\" (UID: 
\"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943121 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwwp\" (UniqueName: \"kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943138 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943159 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943173 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943225 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2hn\" (UniqueName: \"kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943248 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.943825 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.950444 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.952157 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.968065 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hhmsm"] Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 
15:55:34.970394 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.973576 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.973961 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.974147 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ncjrr" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.975478 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.987171 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.987625 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.991174 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.991896 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwwp\" (UniqueName: \"kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:34 crc kubenswrapper[4713]: I0126 15:55:34.992218 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts\") pod \"ceilometer-0\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " pod="openstack/ceilometer-0" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.008181 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2hn\" (UniqueName: \"kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.020755 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.027270 4713 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048063 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048145 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048217 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrj92\" (UniqueName: \"kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048243 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048278 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk8gw\" (UniqueName: \"kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048349 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048394 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.048418 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.053731 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle\") pod \"cinder-db-sync-nkt8b\" (UID: 
\"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.073429 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"71c01f33ada42800c41cd933487e4ccf52b617025658cd8b76d506d214a07635"} Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.073485 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"d12e9c50f5993239ca91213704f058726e4439f036f0c648ef13ce814b26d44f"} Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.073461 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data\") pod \"cinder-db-sync-nkt8b\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.110438 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.134358 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hhmsm"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.146512 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h5gr4"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.147811 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151144 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151174 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151220 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151254 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrj92\" (UniqueName: \"kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " 
pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151342 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151480 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk8gw\" (UniqueName: \"kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151546 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.151971 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gzg5n" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.152221 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.153275 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.176033 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.176584 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.177146 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.179973 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.180816 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.205466 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mb6k"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.213797 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.215686 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.236470 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5gr4"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.253050 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.253125 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqxr6\" (UniqueName: \"kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.253210 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.261310 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.265861 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.293960 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk8gw\" (UniqueName: \"kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw\") pod \"neutron-db-sync-9mb6k\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.294190 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrj92\" (UniqueName: \"kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92\") pod \"placement-db-sync-hhmsm\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") " pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.342307 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356609 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356669 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2w49\" (UniqueName: \"kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356694 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356744 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356767 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqxr6\" (UniqueName: \"kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356827 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356852 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.356871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.370869 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle\") pod \"barbican-db-sync-h5gr4\" (UID: 
\"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.379909 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.432879 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmsm" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.463072 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2w49\" (UniqueName: \"kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.464173 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.466208 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.466944 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.473434 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.473766 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.473830 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.478550 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.479091 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.486030 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqxr6\" (UniqueName: \"kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6\") pod \"barbican-db-sync-h5gr4\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") " pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.524391 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2w49\" (UniqueName: \"kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49\") pod \"dnsmasq-dns-68dcc9cf6f-zwlt9\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.673244 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r2ngk"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.780235 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5gr4" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.792362 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.798485 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"] Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.846680 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97e2303-b568-4580-83bc-3ee8e4386fb8" path="/var/lib/kubelet/pods/f97e2303-b568-4580-83bc-3ee8e4386fb8/volumes" Jan 26 15:55:35 crc kubenswrapper[4713]: I0126 15:55:35.983029 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zhp42"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.101764 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.119856 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zhp42" event={"ID":"5c67f072-d970-466d-a3c7-20df7968e5f2","Type":"ContainerStarted","Data":"3e846b3330a0a6cdc527f167f350729e77b91a2b3ca9f9deef8c83cc770550c0"} Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.137564 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"1f874b6a9cabb2e87bfbdaca11e5712bbe3a41997cf297ebf265a8795cba0630"} Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.140478 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" event={"ID":"005caa44-e394-42fb-9ee5-8f98f6289180","Type":"ContainerStarted","Data":"b2e732a5c15037b8bc7897c7ac5a16ad12d4c70f6d4f0a7a068ae86b604541ee"} Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.153113 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r2ngk" event={"ID":"da7f24c3-9e71-41f3-a74e-2ab3daa0efae","Type":"ContainerStarted","Data":"5c1ec225aabe239e7e26642cdb7e7f386c9c8e563b87c6a898fc5c00a263bbd0"} Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.265877 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mb6k"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.297610 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nkt8b"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.533744 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hhmsm"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.561212 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9p9dm"] Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.573658 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9p9dm"] Jan 26 15:55:36 crc kubenswrapper[4713]: W0126 15:55:36.611349 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod483861ab_4f8a_485a_91f2_ad78944b7124.slice/crio-dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02 WatchSource:0}: Error finding container dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02: Status 404 returned error can't find the container with id dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02 Jan 26 15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.659559 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 
15:55:36 crc kubenswrapper[4713]: I0126 15:55:36.714105 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5gr4"] Jan 26 15:55:36 crc kubenswrapper[4713]: W0126 15:55:36.762653 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a3c9534_c956_4a61_a9fe_73026809a2bb.slice/crio-5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d WatchSource:0}: Error finding container 5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d: Status 404 returned error can't find the container with id 5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.225419 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerStarted","Data":"30c1ad8f13fee8c6347bc2fc0b2f507d1a77188d6e3b28b86db4b18c49207ba8"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.225728 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa01a31-895a-4fcd-845b-264c0cec88de","Type":"ContainerStarted","Data":"420735821aac096408f035685d2d44bf476152fec46a141c278bcd3c4858405f"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.231626 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmsm" event={"ID":"483861ab-4f8a-485a-91f2-ad78944b7124","Type":"ContainerStarted","Data":"dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.254745 4713 generic.go:334] "Generic (PLEG): container finished" podID="005caa44-e394-42fb-9ee5-8f98f6289180" containerID="da2548e21e1c03923b6f6733da3515c9b014c12e9d38e07e2bda6add2a4ded11" exitCode=0 Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.254958 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" event={"ID":"005caa44-e394-42fb-9ee5-8f98f6289180","Type":"ContainerDied","Data":"da2548e21e1c03923b6f6733da3515c9b014c12e9d38e07e2bda6add2a4ded11"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.260093 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerStarted","Data":"b954c4161600132224a2cb89107634452a13db90e3ee3f81c8110a825c8bbe16"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.260154 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerStarted","Data":"8e25e75b6d4a329906bbd1684c543d5739de8c362f413b188d752192dda63208"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.270155 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r2ngk" event={"ID":"da7f24c3-9e71-41f3-a74e-2ab3daa0efae","Type":"ContainerStarted","Data":"4e8f2f653f219380db269096b53ae8d320a6da06d20fe12f7e3726109ac70c9f"} Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.303157 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.303141711 podStartE2EDuration="20.303141711s" podCreationTimestamp="2026-01-26 15:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.303607 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5gr4" event={"ID":"1a3c9534-c956-4a61-a9fe-73026809a2bb","Type":"ContainerStarted","Data":"5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d"}
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.312778 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nkt8b" event={"ID":"c8a35a5b-49a1-45aa-9090-2aab8a4893ce","Type":"ContainerStarted","Data":"63f5d65f2588e8b9fd24482c1ddd8ed4644ad78f455fe6f8e9a989838ea29049"}
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.365335 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mb6k" event={"ID":"4e096468-d163-47e3-b23a-be3b1e15d844","Type":"ContainerStarted","Data":"5e9c06949e94b0e9ecd98a54170002f932093f73176c8a23cd3413f84fe164c3"}
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.365389 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mb6k" event={"ID":"4e096468-d163-47e3-b23a-be3b1e15d844","Type":"ContainerStarted","Data":"93ab8e9c1c35a57901a328c6d73acf4f228743ccb6af6f61cea8cfc86e414064"}
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.412056 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.441521 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerStarted","Data":"ee7669c995bb95ec938c2c096582d412cb6ad1393296eeb513e6a242776385c6"}
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.481945 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r2ngk" podStartSLOduration=3.481927094 podStartE2EDuration="3.481927094s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:37.427644064 +0000 UTC m=+1312.564661299" watchObservedRunningTime="2026-01-26 15:55:37.481927094 +0000 UTC m=+1312.618944329"
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.510861 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9mb6k" podStartSLOduration=3.510839538 podStartE2EDuration="3.510839538s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:37.482932332 +0000 UTC m=+1312.619949567" watchObservedRunningTime="2026-01-26 15:55:37.510839538 +0000 UTC m=+1312.647856763"
Jan 26 15:55:37 crc kubenswrapper[4713]: I0126 15:55:37.888901 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad281772-79dc-459e-8bae-83588fde23c6" path="/var/lib/kubelet/pods/ad281772-79dc-459e-8bae-83588fde23c6/volumes"
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.246439 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.468565 4713 generic.go:334] "Generic (PLEG): container finished" podID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerID="b954c4161600132224a2cb89107634452a13db90e3ee3f81c8110a825c8bbe16" exitCode=0
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.470727 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerDied","Data":"b954c4161600132224a2cb89107634452a13db90e3ee3f81c8110a825c8bbe16"}
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.748026 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf"
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.800833 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb\") pod \"005caa44-e394-42fb-9ee5-8f98f6289180\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") "
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.800910 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc\") pod \"005caa44-e394-42fb-9ee5-8f98f6289180\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") "
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.801191 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b44k\" (UniqueName: \"kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k\") pod \"005caa44-e394-42fb-9ee5-8f98f6289180\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") "
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.801253 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config\") pod \"005caa44-e394-42fb-9ee5-8f98f6289180\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") "
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.801341 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb\") pod \"005caa44-e394-42fb-9ee5-8f98f6289180\" (UID: \"005caa44-e394-42fb-9ee5-8f98f6289180\") "
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.821624 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k" (OuterVolumeSpecName: "kube-api-access-9b44k") pod "005caa44-e394-42fb-9ee5-8f98f6289180" (UID: "005caa44-e394-42fb-9ee5-8f98f6289180"). InnerVolumeSpecName "kube-api-access-9b44k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.835754 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "005caa44-e394-42fb-9ee5-8f98f6289180" (UID: "005caa44-e394-42fb-9ee5-8f98f6289180"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.849837 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "005caa44-e394-42fb-9ee5-8f98f6289180" (UID: "005caa44-e394-42fb-9ee5-8f98f6289180"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.865426 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config" (OuterVolumeSpecName: "config") pod "005caa44-e394-42fb-9ee5-8f98f6289180" (UID: "005caa44-e394-42fb-9ee5-8f98f6289180"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.900729 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "005caa44-e394-42fb-9ee5-8f98f6289180" (UID: "005caa44-e394-42fb-9ee5-8f98f6289180"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.906690 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b44k\" (UniqueName: \"kubernetes.io/projected/005caa44-e394-42fb-9ee5-8f98f6289180-kube-api-access-9b44k\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.906753 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.906791 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.906839 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:38 crc kubenswrapper[4713]: I0126 15:55:38.906854 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005caa44-e394-42fb-9ee5-8f98f6289180-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.482481 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf" event={"ID":"005caa44-e394-42fb-9ee5-8f98f6289180","Type":"ContainerDied","Data":"b2e732a5c15037b8bc7897c7ac5a16ad12d4c70f6d4f0a7a068ae86b604541ee"}
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.482537 4713 scope.go:117] "RemoveContainer" containerID="da2548e21e1c03923b6f6733da3515c9b014c12e9d38e07e2bda6add2a4ded11"
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.482656 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-h2zhf"
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.494475 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerStarted","Data":"bfa389dee1f67f72bd6e523abe4fab279ac5b08bf16bb151b68a049fe02a52b3"}
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.497237 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.505707 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"2f630992c0223d104801e3bb75d1267c7ba0adf14dc48143f98d107be0c8cb29"}
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.505772 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"cdffabf00a8fa64947a003136911407608438135d2247d5cacb7bcedc309a2fa"}
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.527595 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" podStartSLOduration=5.527576626 podStartE2EDuration="5.527576626s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:39.52305328 +0000 UTC m=+1314.660070515" watchObservedRunningTime="2026-01-26 15:55:39.527576626 +0000 UTC m=+1314.664593861"
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.596231 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"]
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.603969 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-h2zhf"]
Jan 26 15:55:39 crc kubenswrapper[4713]: I0126 15:55:39.823724 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005caa44-e394-42fb-9ee5-8f98f6289180" path="/var/lib/kubelet/pods/005caa44-e394-42fb-9ee5-8f98f6289180/volumes"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.561835 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btgqg"]
Jan 26 15:55:41 crc kubenswrapper[4713]: E0126 15:55:41.562911 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005caa44-e394-42fb-9ee5-8f98f6289180" containerName="init"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.562927 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="005caa44-e394-42fb-9ee5-8f98f6289180" containerName="init"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.563145 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="005caa44-e394-42fb-9ee5-8f98f6289180" containerName="init"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.564004 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.570957 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.571943 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"c764aa7891cb1b16df2d4616a422423cf9238fc96810c5b927e5c74e13a04072"}
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.571978 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"5f978baac1af30c5e4645070f0e170d6ed8069221edaea06afaaadcd7eb2442d"}
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.609426 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btgqg"]
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.691913 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9sf\" (UniqueName: \"kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.692100 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.795543 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9sf\" (UniqueName: \"kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.795657 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.796391 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.817288 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9sf\" (UniqueName: \"kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf\") pod \"root-account-create-update-btgqg\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.885044 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btgqg"
Jan 26 15:55:41 crc kubenswrapper[4713]: I0126 15:55:41.941873 4713 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8fcf6581-6532-4f68-9a54-01d32dd012cc"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8fcf6581-6532-4f68-9a54-01d32dd012cc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8fcf6581_6532_4f68_9a54_01d32dd012cc.slice"
Jan 26 15:55:41 crc kubenswrapper[4713]: E0126 15:55:41.941924 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8fcf6581-6532-4f68-9a54-01d32dd012cc] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8fcf6581-6532-4f68-9a54-01d32dd012cc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8fcf6581_6532_4f68_9a54_01d32dd012cc.slice" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx" podUID="8fcf6581-6532-4f68-9a54-01d32dd012cc"
Jan 26 15:55:42 crc kubenswrapper[4713]: I0126 15:55:42.592238 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-vbxxx"
Jan 26 15:55:42 crc kubenswrapper[4713]: I0126 15:55:42.666039 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"]
Jan 26 15:55:42 crc kubenswrapper[4713]: I0126 15:55:42.679203 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-vbxxx"]
Jan 26 15:55:43 crc kubenswrapper[4713]: I0126 15:55:43.613509 4713 generic.go:334] "Generic (PLEG): container finished" podID="da7f24c3-9e71-41f3-a74e-2ab3daa0efae" containerID="4e8f2f653f219380db269096b53ae8d320a6da06d20fe12f7e3726109ac70c9f" exitCode=0
Jan 26 15:55:43 crc kubenswrapper[4713]: I0126 15:55:43.613562 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r2ngk" event={"ID":"da7f24c3-9e71-41f3-a74e-2ab3daa0efae","Type":"ContainerDied","Data":"4e8f2f653f219380db269096b53ae8d320a6da06d20fe12f7e3726109ac70c9f"}
Jan 26 15:55:43 crc kubenswrapper[4713]: I0126 15:55:43.820317 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fcf6581-6532-4f68-9a54-01d32dd012cc" path="/var/lib/kubelet/pods/8fcf6581-6532-4f68-9a54-01d32dd012cc/volumes"
Jan 26 15:55:44 crc kubenswrapper[4713]: I0126 15:55:44.780117 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0"
Jan 26 15:55:45 crc kubenswrapper[4713]: I0126 15:55:45.795077 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"
Jan 26 15:55:45 crc kubenswrapper[4713]: I0126 15:55:45.869515 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"]
Jan 26 15:55:45 crc kubenswrapper[4713]: I0126 15:55:45.871842 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-dmh4f" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" containerID="cri-o://7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3" gracePeriod=10
Jan 26 15:55:46 crc kubenswrapper[4713]: E0126 15:55:46.060659 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c37845_d3f7_4a91_9dc5_e0f8967b5682.slice/crio-conmon-7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3.scope\": RecentStats: unable to find data in memory cache]"
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c37845_d3f7_4a91_9dc5_e0f8967b5682.slice/crio-conmon-7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:55:46 crc kubenswrapper[4713]: I0126 15:55:46.651447 4713 generic.go:334] "Generic (PLEG): container finished" podID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerID="7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3" exitCode=0 Jan 26 15:55:46 crc kubenswrapper[4713]: I0126 15:55:46.651752 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dmh4f" event={"ID":"21c37845-d3f7-4a91-9dc5-e0f8967b5682","Type":"ContainerDied","Data":"7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3"} Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.664490 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r2ngk" event={"ID":"da7f24c3-9e71-41f3-a74e-2ab3daa0efae","Type":"ContainerDied","Data":"5c1ec225aabe239e7e26642cdb7e7f386c9c8e563b87c6a898fc5c00a263bbd0"} Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.664533 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1ec225aabe239e7e26642cdb7e7f386c9c8e563b87c6a898fc5c00a263bbd0" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.716323 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.861861 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts\") pod \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.861939 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrz6n\" (UniqueName: \"kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n\") pod \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.862015 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data\") pod \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.862168 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle\") pod \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.862318 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys\") pod \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.862337 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys\") pod 
\"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\" (UID: \"da7f24c3-9e71-41f3-a74e-2ab3daa0efae\") " Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.869577 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n" (OuterVolumeSpecName: "kube-api-access-vrz6n") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "kube-api-access-vrz6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.872547 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.872631 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts" (OuterVolumeSpecName: "scripts") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.872740 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.920061 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.920603 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data" (OuterVolumeSpecName: "config-data") pod "da7f24c3-9e71-41f3-a74e-2ab3daa0efae" (UID: "da7f24c3-9e71-41f3-a74e-2ab3daa0efae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965161 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965197 4713 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965209 4713 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965219 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965231 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrz6n\" (UniqueName: \"kubernetes.io/projected/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-kube-api-access-vrz6n\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:47 crc kubenswrapper[4713]: I0126 15:55:47.965244 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da7f24c3-9e71-41f3-a74e-2ab3daa0efae-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.247478 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.259087 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.675918 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r2ngk" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.683020 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.808195 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r2ngk"] Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.825711 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r2ngk"] Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.900683 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9vf54"] Jan 26 15:55:48 crc kubenswrapper[4713]: E0126 15:55:48.909006 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7f24c3-9e71-41f3-a74e-2ab3daa0efae" containerName="keystone-bootstrap" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.909043 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7f24c3-9e71-41f3-a74e-2ab3daa0efae" containerName="keystone-bootstrap" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.909220 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7f24c3-9e71-41f3-a74e-2ab3daa0efae" containerName="keystone-bootstrap" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.909982 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.916691 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.916785 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2wsv8" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.917002 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.917657 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.917843 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vf54"] Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.984338 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.984595 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.984768 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc 
kubenswrapper[4713]: I0126 15:55:48.984934 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4cv\" (UniqueName: \"kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.985034 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:48 crc kubenswrapper[4713]: I0126 15:55:48.985074 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087148 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087252 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087334 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4cv\" (UniqueName: \"kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087407 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087438 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.087513 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.092550 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.093100 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.095449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.095464 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.106022 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.118465 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4cv\" (UniqueName: \"kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv\") pod \"keystone-bootstrap-9vf54\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") " pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.230348 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vf54" Jan 26 15:55:49 crc kubenswrapper[4713]: I0126 15:55:49.814961 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7f24c3-9e71-41f3-a74e-2ab3daa0efae" path="/var/lib/kubelet/pods/da7f24c3-9e71-41f3-a74e-2ab3daa0efae/volumes" Jan 26 15:55:54 crc kubenswrapper[4713]: I0126 15:55:54.147721 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-dmh4f" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.267632 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.366490 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config\") pod \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") "
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.366658 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb\") pod \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") "
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.367003 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf25n\" (UniqueName: \"kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n\") pod \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") "
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.367050 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc\") pod \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") "
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.367101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb\") pod \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\" (UID: \"21c37845-d3f7-4a91-9dc5-e0f8967b5682\") "
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.374094 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n" (OuterVolumeSpecName: "kube-api-access-lf25n") pod "21c37845-d3f7-4a91-9dc5-e0f8967b5682" (UID: "21c37845-d3f7-4a91-9dc5-e0f8967b5682"). InnerVolumeSpecName "kube-api-access-lf25n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.414665 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config" (OuterVolumeSpecName: "config") pod "21c37845-d3f7-4a91-9dc5-e0f8967b5682" (UID: "21c37845-d3f7-4a91-9dc5-e0f8967b5682"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.418005 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21c37845-d3f7-4a91-9dc5-e0f8967b5682" (UID: "21c37845-d3f7-4a91-9dc5-e0f8967b5682"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.419101 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21c37845-d3f7-4a91-9dc5-e0f8967b5682" (UID: "21c37845-d3f7-4a91-9dc5-e0f8967b5682"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.428210 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21c37845-d3f7-4a91-9dc5-e0f8967b5682" (UID: "21c37845-d3f7-4a91-9dc5-e0f8967b5682"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.469257 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf25n\" (UniqueName: \"kubernetes.io/projected/21c37845-d3f7-4a91-9dc5-e0f8967b5682-kube-api-access-lf25n\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.469287 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.469297 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.469306 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.469316 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21c37845-d3f7-4a91-9dc5-e0f8967b5682-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 15:55:57 crc kubenswrapper[4713]: E0126 15:55:57.628622 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 26 15:55:57 crc kubenswrapper[4713]: E0126 15:55:57.629112 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrj92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-hhmsm_openstack(483861ab-4f8a-485a-91f2-ad78944b7124): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:55:57 crc kubenswrapper[4713]: E0126 15:55:57.630616 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-hhmsm" podUID="483861ab-4f8a-485a-91f2-ad78944b7124"
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.770665 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dmh4f" event={"ID":"21c37845-d3f7-4a91-9dc5-e0f8967b5682","Type":"ContainerDied","Data":"1e2f6e04792eca48e9437147245b09b944dc147cd39faca99b5152105e02569b"}
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.770748 4713 scope.go:117] "RemoveContainer" containerID="7e96a4a38d1e12bf4634a02bea712f9005834e6cf2a79cf164f53f2f5ad49ac3"
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.770687 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dmh4f"
Jan 26 15:55:57 crc kubenswrapper[4713]: E0126 15:55:57.773124 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-hhmsm" podUID="483861ab-4f8a-485a-91f2-ad78944b7124"
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.858496 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"]
Jan 26 15:55:57 crc kubenswrapper[4713]: I0126 15:55:57.871879 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dmh4f"]
Jan 26 15:55:59 crc kubenswrapper[4713]: I0126 15:55:59.149222 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-dmh4f" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout"
Jan 26 15:55:59 crc kubenswrapper[4713]: I0126 15:55:59.822062 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" path="/var/lib/kubelet/pods/21c37845-d3f7-4a91-9dc5-e0f8967b5682/volumes"
Jan 26 15:56:05 crc kubenswrapper[4713]: E0126 15:56:05.803705 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Jan 26 15:56:05 crc kubenswrapper[4713]: E0126 15:56:05.804152 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8626j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xfq6j_openstack(67bee733-1013-44d9-ac74-5ce552dbb606): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:05 crc kubenswrapper[4713]: E0126 15:56:05.805875 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xfq6j" podUID="67bee733-1013-44d9-ac74-5ce552dbb606"
Jan 26 15:56:06 crc kubenswrapper[4713]: E0126 15:56:06.616030 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 26 15:56:06 crc kubenswrapper[4713]: E0126 15:56:06.616434 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqxr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h5gr4_openstack(1a3c9534-c956-4a61-a9fe-73026809a2bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:06 crc kubenswrapper[4713]: E0126 15:56:06.617734 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h5gr4" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb"
Jan 26 15:56:06 crc kubenswrapper[4713]: E0126 15:56:06.894396 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h5gr4" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb"
Jan 26 15:56:07 crc kubenswrapper[4713]: I0126 15:56:07.030990 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btgqg"]
Jan 26 15:56:07 crc kubenswrapper[4713]: E0126 15:56:07.791460 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 26 15:56:07 crc kubenswrapper[4713]: E0126 15:56:07.791873 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc2hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nkt8b_openstack(c8a35a5b-49a1-45aa-9090-2aab8a4893ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:07 crc kubenswrapper[4713]: E0126 15:56:07.793114 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nkt8b" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce"
Jan 26 15:56:07 crc kubenswrapper[4713]: I0126 15:56:07.952049 4713 generic.go:334] "Generic (PLEG): container finished" podID="4e096468-d163-47e3-b23a-be3b1e15d844" containerID="5e9c06949e94b0e9ecd98a54170002f932093f73176c8a23cd3413f84fe164c3" exitCode=0
Jan 26 15:56:07 crc kubenswrapper[4713]: I0126 15:56:07.953968 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mb6k" event={"ID":"4e096468-d163-47e3-b23a-be3b1e15d844","Type":"ContainerDied","Data":"5e9c06949e94b0e9ecd98a54170002f932093f73176c8a23cd3413f84fe164c3"}
Jan 26 15:56:07 crc kubenswrapper[4713]: E0126 15:56:07.966031 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-nkt8b" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.693789 4713 scope.go:117] "RemoveContainer" containerID="c44bc47d8a5c9e78e2536a5d7972e14bcfd0de123f6a57ad33e6f33c5a9a7e6f" Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.697814 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.845520 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.993025 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btgqg" event={"ID":"2c00cbec-fd99-4aee-b111-71c6a9d0cacc","Type":"ContainerStarted","Data":"65528c1b65c1dbb418ca8f2b96e613b043dadb4197d49a5d1e6691733026422d"} Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.994470 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mb6k" event={"ID":"4e096468-d163-47e3-b23a-be3b1e15d844","Type":"ContainerDied","Data":"93ab8e9c1c35a57901a328c6d73acf4f228743ccb6af6f61cea8cfc86e414064"} Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.994493 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mb6k" Jan 26 15:56:10 crc kubenswrapper[4713]: I0126 15:56:10.994509 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ab8e9c1c35a57901a328c6d73acf4f228743ccb6af6f61cea8cfc86e414064" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.019328 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config\") pod \"4e096468-d163-47e3-b23a-be3b1e15d844\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.019526 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle\") pod \"4e096468-d163-47e3-b23a-be3b1e15d844\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.019574 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk8gw\" (UniqueName: \"kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw\") pod \"4e096468-d163-47e3-b23a-be3b1e15d844\" (UID: \"4e096468-d163-47e3-b23a-be3b1e15d844\") " Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.026606 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw" (OuterVolumeSpecName: "kube-api-access-rk8gw") pod "4e096468-d163-47e3-b23a-be3b1e15d844" (UID: "4e096468-d163-47e3-b23a-be3b1e15d844"). InnerVolumeSpecName "kube-api-access-rk8gw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.052882 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config" (OuterVolumeSpecName: "config") pod "4e096468-d163-47e3-b23a-be3b1e15d844" (UID: "4e096468-d163-47e3-b23a-be3b1e15d844"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.053053 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e096468-d163-47e3-b23a-be3b1e15d844" (UID: "4e096468-d163-47e3-b23a-be3b1e15d844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.122227 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.122279 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e096468-d163-47e3-b23a-be3b1e15d844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.122296 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk8gw\" (UniqueName: \"kubernetes.io/projected/4e096468-d163-47e3-b23a-be3b1e15d844-kube-api-access-rk8gw\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:11 crc kubenswrapper[4713]: I0126 15:56:11.208982 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vf54"] Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.140567 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"] Jan 26 15:56:12 crc kubenswrapper[4713]: E0126 15:56:12.141248 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="init" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.141263 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="init" Jan 26 15:56:12 crc kubenswrapper[4713]: E0126 15:56:12.141276 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e096468-d163-47e3-b23a-be3b1e15d844" containerName="neutron-db-sync" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.141284 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e096468-d163-47e3-b23a-be3b1e15d844" containerName="neutron-db-sync" Jan 26 15:56:12 crc kubenswrapper[4713]: E0126 15:56:12.141301 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.141307 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.141933 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c37845-d3f7-4a91-9dc5-e0f8967b5682" containerName="dnsmasq-dns" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.141958 4713 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4e096468-d163-47e3-b23a-be3b1e15d844" containerName="neutron-db-sync" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.143691 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.144811 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.144889 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.144930 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.144954 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbws\" (UniqueName: \"kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.145086 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.189686 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"] Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249326 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249417 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249504 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249553 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249588 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.249615 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbws\" (UniqueName: \"kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.251008 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.251075 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.254482 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d4586" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.255348 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.255433 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.257948 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.258196 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.258353 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.258943 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.263822 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 
15:56:12.300858 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbws\" (UniqueName: \"kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws\") pod \"dnsmasq-dns-54b684dc7c-jsbgp\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") " pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.572818 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.576122 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8c7t\" (UniqueName: \"kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.576182 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.576260 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.576307 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.577535 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.679786 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8c7t\" (UniqueName: \"kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.679840 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.679891 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs\") pod 
\"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.679943 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.680053 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.684462 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.692121 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.693589 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.693947 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.707551 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8c7t\" (UniqueName: \"kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t\") pod \"neutron-66cbb889bd-76zsk\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:12 crc kubenswrapper[4713]: I0126 15:56:12.879541 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:13 crc kubenswrapper[4713]: W0126 15:56:13.968375 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848ce8ac_5171_45ab_b1c0_737d4ba93663.slice/crio-ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8 WatchSource:0}: Error finding container ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8: Status 404 returned error can't find the container with id ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8 Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.025137 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vf54" event={"ID":"848ce8ac-5171-45ab-b1c0-737d4ba93663","Type":"ContainerStarted","Data":"ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8"} Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.559022 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.562739 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.568400 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.568477 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.579442 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.724388 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.724762 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.724814 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.724857 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.725001 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.725244 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqkf\" (UniqueName: \"kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.725311 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827210 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827285 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827342 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqkf\" (UniqueName: \"kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827392 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827448 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827476 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.827519 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.835306 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.842321 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.848819 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.853157 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.854066 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.855275 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.856228 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqkf\" (UniqueName: \"kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf\") pod \"neutron-649645b98f-x7rkr\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:14 crc kubenswrapper[4713]: E0126 15:56:14.864588 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 26 15:56:14 crc kubenswrapper[4713]: E0126 15:56:14.864642 4713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 26 15:56:14 crc kubenswrapper[4713]: E0126 15:56:14.864765 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4m7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zhp42_openstack(5c67f072-d970-466d-a3c7-20df7968e5f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:14 crc kubenswrapper[4713]: E0126 15:56:14.866588 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-zhp42" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" Jan 26 15:56:14 crc kubenswrapper[4713]: I0126 15:56:14.885747 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:15 crc kubenswrapper[4713]: E0126 15:56:15.111526 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-zhp42" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" Jan 26 15:56:15 crc kubenswrapper[4713]: I0126 15:56:15.452304 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"] Jan 26 15:56:15 crc kubenswrapper[4713]: W0126 15:56:15.469213 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050ed5dc_10cf_4ac6_9109_4905cf0bad39.slice/crio-bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3 WatchSource:0}: Error finding container bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3: Status 404 returned error can't find the container with id bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3 Jan 26 15:56:15 crc kubenswrapper[4713]: I0126 15:56:15.641773 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:56:15 crc kubenswrapper[4713]: I0126 15:56:15.829659 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.096710 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerStarted","Data":"248ada6bfa1f1c144e649100bc121acb023ce451181c1d6af73c19256c7014ca"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.105480 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmsm" event={"ID":"483861ab-4f8a-485a-91f2-ad78944b7124","Type":"ContainerStarted","Data":"0fcee5829eb3135a79b29eceeca390dc3037a5854b7384e2f583ec3bae7763fa"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.121215 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"d12d554178fd0a95a4206aecc92037510f8e21a3b02434c64af1ff9f9df3262d"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.121264 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"fac42918dc7c6b420d6aae851c0a61dbf6281bbeefb88e377f456b47bb545a24"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.129762 4713 generic.go:334] "Generic (PLEG): container finished" podID="2c00cbec-fd99-4aee-b111-71c6a9d0cacc" containerID="b1349437bc5f4a953a0930061706654f25b0937e4c332347c8ee999ace3d4f9c" exitCode=0 Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.129855 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btgqg" event={"ID":"2c00cbec-fd99-4aee-b111-71c6a9d0cacc","Type":"ContainerDied","Data":"b1349437bc5f4a953a0930061706654f25b0937e4c332347c8ee999ace3d4f9c"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.132625 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hhmsm" podStartSLOduration=3.641523743 podStartE2EDuration="42.132604035s" 
podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="2026-01-26 15:55:36.633948789 +0000 UTC m=+1311.770966014" lastFinishedPulling="2026-01-26 15:56:15.125029071 +0000 UTC m=+1350.262046306" observedRunningTime="2026-01-26 15:56:16.127816451 +0000 UTC m=+1351.264833696" watchObservedRunningTime="2026-01-26 15:56:16.132604035 +0000 UTC m=+1351.269621270" Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.141080 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vf54" event={"ID":"848ce8ac-5171-45ab-b1c0-737d4ba93663","Type":"ContainerStarted","Data":"e3e75f97e36457d6181f8b3788e3bfea1ccdf8454baaa33071073ab40398cc16"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.154353 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerStarted","Data":"75a537f936136c3e39a41f73d0c670729f8f142e30a72bc78a65c476b888a922"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.170526 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerStarted","Data":"a9a8633898422c3f24e3855b554330da6bd513111cbe6b23f404dbe1d9aa5337"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.192773 4713 generic.go:334] "Generic (PLEG): container finished" podID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerID="029a64d69c193541304d83d822c19ab76ceb4a9fe8895f296a338c41d9385997" exitCode=0 Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.192827 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" event={"ID":"050ed5dc-10cf-4ac6-9109-4905cf0bad39","Type":"ContainerDied","Data":"029a64d69c193541304d83d822c19ab76ceb4a9fe8895f296a338c41d9385997"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.192856 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" event={"ID":"050ed5dc-10cf-4ac6-9109-4905cf0bad39","Type":"ContainerStarted","Data":"bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3"} Jan 26 15:56:16 crc kubenswrapper[4713]: I0126 15:56:16.240436 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9vf54" podStartSLOduration=28.240416183 podStartE2EDuration="28.240416183s" podCreationTimestamp="2026-01-26 15:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:16.169032768 +0000 UTC m=+1351.306050003" watchObservedRunningTime="2026-01-26 15:56:16.240416183 +0000 UTC m=+1351.377433418" Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.255816 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"5e95b7f447f7df4772db27df7fcd43aa54c9deb3718a110668ca7501dfd2feec"} Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.266640 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerStarted","Data":"e15aac916f031b21242a43b9fcf8a0a5d520031395e5d5d46d3c72531d566d70"} Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.269058 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" 
event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerStarted","Data":"b28def777f3a3cfb8248eb3963b9717c379abbc9050e3ae059af3b0f99f1c763"} Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.269114 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerStarted","Data":"59698a5520c4575e55acb5ccb5abe8d4aaec4d15a9112979c654bda564134150"} Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.684009 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btgqg" Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.733491 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx9sf\" (UniqueName: \"kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf\") pod \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.733805 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts\") pod \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\" (UID: \"2c00cbec-fd99-4aee-b111-71c6a9d0cacc\") " Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.741822 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf" (OuterVolumeSpecName: "kube-api-access-nx9sf") pod "2c00cbec-fd99-4aee-b111-71c6a9d0cacc" (UID: "2c00cbec-fd99-4aee-b111-71c6a9d0cacc"). InnerVolumeSpecName "kube-api-access-nx9sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.743091 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c00cbec-fd99-4aee-b111-71c6a9d0cacc" (UID: "2c00cbec-fd99-4aee-b111-71c6a9d0cacc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.807927 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.838020 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx9sf\" (UniqueName: \"kubernetes.io/projected/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-kube-api-access-nx9sf\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:17 crc kubenswrapper[4713]: I0126 15:56:17.838410 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c00cbec-fd99-4aee-b111-71c6a9d0cacc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.299170 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"7ccfab88dc7b2605baff392f16798ca0e5d1513487e95bf833b1032a714de54c"} Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.301431 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btgqg" event={"ID":"2c00cbec-fd99-4aee-b111-71c6a9d0cacc","Type":"ContainerDied","Data":"65528c1b65c1dbb418ca8f2b96e613b043dadb4197d49a5d1e6691733026422d"} Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.301461 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65528c1b65c1dbb418ca8f2b96e613b043dadb4197d49a5d1e6691733026422d" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.301492 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btgqg" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.308755 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerStarted","Data":"458739c55da0b0808d87285bb5f34dcd76ee12612ddf0d6d3277564b0bba017b"} Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.310958 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.320280 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" event={"ID":"050ed5dc-10cf-4ac6-9109-4905cf0bad39","Type":"ContainerStarted","Data":"c5efe2fac3f90330afe61bdf4cf24d9be4a85ebc581b70ee00d028ac557124fe"} Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.320417 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.357598 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-649645b98f-x7rkr" podStartSLOduration=4.357574265 podStartE2EDuration="4.357574265s" podCreationTimestamp="2026-01-26 15:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:18.339532913 +0000 UTC m=+1353.476550148" watchObservedRunningTime="2026-01-26 15:56:18.357574265 +0000 UTC m=+1353.494591490" Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.377038 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66cbb889bd-76zsk" podStartSLOduration=6.377016556 
podStartE2EDuration="6.377016556s" podCreationTimestamp="2026-01-26 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:18.372466429 +0000 UTC m=+1353.509483664" watchObservedRunningTime="2026-01-26 15:56:18.377016556 +0000 UTC m=+1353.514033791"
Jan 26 15:56:18 crc kubenswrapper[4713]: I0126 15:56:18.408532 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" podStartSLOduration=6.408509772 podStartE2EDuration="6.408509772s" podCreationTimestamp="2026-01-26 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:18.401694412 +0000 UTC m=+1353.538711647" watchObservedRunningTime="2026-01-26 15:56:18.408509772 +0000 UTC m=+1353.545527007"
Jan 26 15:56:19 crc kubenswrapper[4713]: I0126 15:56:19.339283 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"b8b48e592dceac97547e415584dc1971ebadc86da49007281865c9fd1fa3afab"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.352572 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerStarted","Data":"4b2bb803f267d70ea0cbe153d58ec8178d95b9c82a8c3ed96d903b08161de957"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.355961 4713 generic.go:334] "Generic (PLEG): container finished" podID="483861ab-4f8a-485a-91f2-ad78944b7124" containerID="0fcee5829eb3135a79b29eceeca390dc3037a5854b7384e2f583ec3bae7763fa" exitCode=0
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.356047 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmsm" event={"ID":"483861ab-4f8a-485a-91f2-ad78944b7124","Type":"ContainerDied","Data":"0fcee5829eb3135a79b29eceeca390dc3037a5854b7384e2f583ec3bae7763fa"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.381939 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"7e4510da85a865ba9ef3a4b7818be537743e838c6b470d2eef39ef4091a50701"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.381998 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0432b2d-538e-4b04-899b-6fe666f340de","Type":"ContainerStarted","Data":"696a576576a07edfe4a04422c4340f18ac2d0d75bcd66a5c87a95894affbafd0"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.383911 4713 generic.go:334] "Generic (PLEG): container finished" podID="848ce8ac-5171-45ab-b1c0-737d4ba93663" containerID="e3e75f97e36457d6181f8b3788e3bfea1ccdf8454baaa33071073ab40398cc16" exitCode=0
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.383980 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vf54" event={"ID":"848ce8ac-5171-45ab-b1c0-737d4ba93663","Type":"ContainerDied","Data":"e3e75f97e36457d6181f8b3788e3bfea1ccdf8454baaa33071073ab40398cc16"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.394821 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5gr4" event={"ID":"1a3c9534-c956-4a61-a9fe-73026809a2bb","Type":"ContainerStarted","Data":"0e96aca03a12ea97b933d81698aaa79cdb2240ef0ada34d64fd5f83bc6efeae4"}
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.434287 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.61410318 podStartE2EDuration="1m36.434267522s" podCreationTimestamp="2026-01-26 15:54:44 +0000 UTC" firstStartedPulling="2026-01-26 15:55:22.009649319 +0000 UTC m=+1297.146666554" lastFinishedPulling="2026-01-26 15:56:14.829813661 +0000 UTC m=+1349.966830896" observedRunningTime="2026-01-26 15:56:20.418300438 +0000 UTC m=+1355.555317673" watchObservedRunningTime="2026-01-26 15:56:20.434267522 +0000 UTC m=+1355.571284757"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.488319 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h5gr4" podStartSLOduration=4.261554556 podStartE2EDuration="46.488275804s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="2026-01-26 15:55:36.794152365 +0000 UTC m=+1311.931169600" lastFinishedPulling="2026-01-26 15:56:19.020873613 +0000 UTC m=+1354.157890848" observedRunningTime="2026-01-26 15:56:20.465514881 +0000 UTC m=+1355.602532126" watchObservedRunningTime="2026-01-26 15:56:20.488275804 +0000 UTC m=+1355.625293039"
Jan 26 15:56:20 crc kubenswrapper[4713]: E0126 15:56:20.808322 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xfq6j" podUID="67bee733-1013-44d9-ac74-5ce552dbb606"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.845278 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"]
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.845590 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="dnsmasq-dns" containerID="cri-o://c5efe2fac3f90330afe61bdf4cf24d9be4a85ebc581b70ee00d028ac557124fe" gracePeriod=10
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.893477 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"]
Jan 26 15:56:20 crc kubenswrapper[4713]: E0126 15:56:20.894088 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c00cbec-fd99-4aee-b111-71c6a9d0cacc" containerName="mariadb-account-create-update"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.894127 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c00cbec-fd99-4aee-b111-71c6a9d0cacc" containerName="mariadb-account-create-update"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.896959 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c00cbec-fd99-4aee-b111-71c6a9d0cacc" containerName="mariadb-account-create-update"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.899932 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.906453 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.939250 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"]
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.955996 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7254\" (UniqueName: \"kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.956300 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.956507 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.956614 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.956981 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:20 crc kubenswrapper[4713]: I0126 15:56:20.957185 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.061952 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.062051 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7254\" (UniqueName: \"kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.062097 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.062127 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.062152 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.062794 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.063040 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.063346 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.063778 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.063790 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.063900 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.096816 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7254\" (UniqueName: \"kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254\") pod \"dnsmasq-dns-7d88d7b95f-glnzf\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.311665 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.413271 4713 generic.go:334] "Generic (PLEG): container finished" podID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerID="c5efe2fac3f90330afe61bdf4cf24d9be4a85ebc581b70ee00d028ac557124fe" exitCode=0
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.414111 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" event={"ID":"050ed5dc-10cf-4ac6-9109-4905cf0bad39","Type":"ContainerDied","Data":"c5efe2fac3f90330afe61bdf4cf24d9be4a85ebc581b70ee00d028ac557124fe"}
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.414175 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp" event={"ID":"050ed5dc-10cf-4ac6-9109-4905cf0bad39","Type":"ContainerDied","Data":"bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3"}
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.414186 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdedb8c643e2445be5b618ae4dec67cfd8c6672ce4bee4a45d9efd39d286c9e3"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.502088 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp"
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.675481 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb\") pod \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") "
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.677209 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config\") pod \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") "
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.677245 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc\") pod \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") "
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.677386 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbws\" (UniqueName: \"kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws\") pod \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") "
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.677456 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb\") pod \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\" (UID: \"050ed5dc-10cf-4ac6-9109-4905cf0bad39\") "
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.711670 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws" (OuterVolumeSpecName: "kube-api-access-qqbws") pod "050ed5dc-10cf-4ac6-9109-4905cf0bad39" (UID: "050ed5dc-10cf-4ac6-9109-4905cf0bad39"). InnerVolumeSpecName "kube-api-access-qqbws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.783025 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbws\" (UniqueName: \"kubernetes.io/projected/050ed5dc-10cf-4ac6-9109-4905cf0bad39-kube-api-access-qqbws\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.801126 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "050ed5dc-10cf-4ac6-9109-4905cf0bad39" (UID: "050ed5dc-10cf-4ac6-9109-4905cf0bad39"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.805085 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config" (OuterVolumeSpecName: "config") pod "050ed5dc-10cf-4ac6-9109-4905cf0bad39" (UID: "050ed5dc-10cf-4ac6-9109-4905cf0bad39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.859524 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "050ed5dc-10cf-4ac6-9109-4905cf0bad39" (UID: "050ed5dc-10cf-4ac6-9109-4905cf0bad39"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.866736 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "050ed5dc-10cf-4ac6-9109-4905cf0bad39" (UID: "050ed5dc-10cf-4ac6-9109-4905cf0bad39"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.884905 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.884956 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.884969 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:21 crc kubenswrapper[4713]: I0126 15:56:21.884977 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/050ed5dc-10cf-4ac6-9109-4905cf0bad39-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.097162 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.289209 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmsm"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.318708 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vf54"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.399000 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs\") pod \"483861ab-4f8a-485a-91f2-ad78944b7124\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.399060 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data\") pod \"483861ab-4f8a-485a-91f2-ad78944b7124\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.399124 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrj92\" (UniqueName: \"kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92\") pod \"483861ab-4f8a-485a-91f2-ad78944b7124\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.399189 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle\") pod \"483861ab-4f8a-485a-91f2-ad78944b7124\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.399298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts\") pod \"483861ab-4f8a-485a-91f2-ad78944b7124\" (UID: \"483861ab-4f8a-485a-91f2-ad78944b7124\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.408812 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs" (OuterVolumeSpecName: "logs") pod "483861ab-4f8a-485a-91f2-ad78944b7124" (UID: "483861ab-4f8a-485a-91f2-ad78944b7124"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.412898 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts" (OuterVolumeSpecName: "scripts") pod "483861ab-4f8a-485a-91f2-ad78944b7124" (UID: "483861ab-4f8a-485a-91f2-ad78944b7124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.413961 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92" (OuterVolumeSpecName: "kube-api-access-jrj92") pod "483861ab-4f8a-485a-91f2-ad78944b7124" (UID: "483861ab-4f8a-485a-91f2-ad78944b7124"). InnerVolumeSpecName "kube-api-access-jrj92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.438834 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerStarted","Data":"caa0124994d657c783765e1846f9bb2e9e228ac753f2430405c3f575ce8d1a12"}
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.440792 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data" (OuterVolumeSpecName: "config-data") pod "483861ab-4f8a-485a-91f2-ad78944b7124" (UID: "483861ab-4f8a-485a-91f2-ad78944b7124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.443264 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmsm" event={"ID":"483861ab-4f8a-485a-91f2-ad78944b7124","Type":"ContainerDied","Data":"dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02"}
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.443303 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff5dc9efcf5f6e767d28391b0df45e9129f169a34844985a20ab53e09a28a02"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.443378 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmsm"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.446011 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b684dc7c-jsbgp"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.446346 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vf54"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.446834 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vf54" event={"ID":"848ce8ac-5171-45ab-b1c0-737d4ba93663","Type":"ContainerDied","Data":"ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8"}
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.446864 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba2574bad7a4f9d06ca53ec69f521f6a75d977ba6415ca758841af3569c24ff8"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.462253 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "483861ab-4f8a-485a-91f2-ad78944b7124" (UID: "483861ab-4f8a-485a-91f2-ad78944b7124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505321 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505461 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505581 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505637 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505681 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.505723 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4cv\" (UniqueName: \"kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv\") pod \"848ce8ac-5171-45ab-b1c0-737d4ba93663\" (UID: \"848ce8ac-5171-45ab-b1c0-737d4ba93663\") "
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.508338 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483861ab-4f8a-485a-91f2-ad78944b7124-logs\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.511100 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.511120 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrj92\" (UniqueName: \"kubernetes.io/projected/483861ab-4f8a-485a-91f2-ad78944b7124-kube-api-access-jrj92\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.511135 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.511178 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483861ab-4f8a-485a-91f2-ad78944b7124-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.518895 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv" (OuterVolumeSpecName: "kube-api-access-dk4cv") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "kube-api-access-dk4cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.536837 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.539079 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts" (OuterVolumeSpecName: "scripts") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.539175 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.539213 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.542707 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data" (OuterVolumeSpecName: "config-data") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.548191 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "848ce8ac-5171-45ab-b1c0-737d4ba93663" (UID: "848ce8ac-5171-45ab-b1c0-737d4ba93663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.550789 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b684dc7c-jsbgp"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.613918 4713 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.613963 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.613977 4713 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.613987 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.614000 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4cv\" (UniqueName: \"kubernetes.io/projected/848ce8ac-5171-45ab-b1c0-737d4ba93663-kube-api-access-dk4cv\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.614014 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848ce8ac-5171-45ab-b1c0-737d4ba93663-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.730694 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d456999d-27w6v"]
Jan 26 15:56:22 crc kubenswrapper[4713]: E0126 15:56:22.731352 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="init"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731433 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="init"
Jan 26 15:56:22 crc kubenswrapper[4713]: E0126 15:56:22.731460 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="dnsmasq-dns"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731468 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="dnsmasq-dns"
Jan 26 15:56:22 crc kubenswrapper[4713]: E0126 15:56:22.731479 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483861ab-4f8a-485a-91f2-ad78944b7124" containerName="placement-db-sync"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731490 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="483861ab-4f8a-485a-91f2-ad78944b7124" containerName="placement-db-sync"
Jan 26 15:56:22 crc kubenswrapper[4713]: E0126 15:56:22.731506 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848ce8ac-5171-45ab-b1c0-737d4ba93663" containerName="keystone-bootstrap"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731515 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="848ce8ac-5171-45ab-b1c0-737d4ba93663" containerName="keystone-bootstrap"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731805 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="848ce8ac-5171-45ab-b1c0-737d4ba93663" containerName="keystone-bootstrap"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731844 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="483861ab-4f8a-485a-91f2-ad78944b7124" containerName="placement-db-sync"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.731863 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" containerName="dnsmasq-dns"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.732824 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.738052 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.738292 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.767400 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bb4458d9d-r4dmr"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.773096 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.776101 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ncjrr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.776283 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.776443 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.776571 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.778700 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.814217 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d456999d-27w6v"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.829398 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bb4458d9d-r4dmr"]
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920144 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-fernet-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920544 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-scripts\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920596 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-internal-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920630 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-config-data\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920702 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-public-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920724 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-public-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920760 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-credential-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-internal-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920806 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-scripts\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920825 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-combined-ca-bundle\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920899 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdlp\" (UniqueName: \"kubernetes.io/projected/bf40e2be-eb43-4c3d-aa4e-58c164059384-kube-api-access-pbdlp\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920935 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf40e2be-eb43-4c3d-aa4e-58c164059384-logs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920968 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-config-data\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.920996 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5dl\" (UniqueName: \"kubernetes.io/projected/7909e8d5-a534-4178-9f85-70c7b10eae4e-kube-api-access-zg5dl\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:22 crc kubenswrapper[4713]: I0126 15:56:22.921034 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-combined-ca-bundle\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.023803 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-config-data\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024654 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5dl\" (UniqueName: \"kubernetes.io/projected/7909e8d5-a534-4178-9f85-70c7b10eae4e-kube-api-access-zg5dl\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024704 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-combined-ca-bundle\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024729 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-fernet-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024786 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-scripts\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024825 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-internal-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024885 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-config-data\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024961 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-public-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.024985 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-public-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025028 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-credential-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025049 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-internal-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025108 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-scripts\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025133 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-combined-ca-bundle\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025223 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdlp\" (UniqueName: \"kubernetes.io/projected/bf40e2be-eb43-4c3d-aa4e-58c164059384-kube-api-access-pbdlp\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025268 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf40e2be-eb43-4c3d-aa4e-58c164059384-logs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.025730 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf40e2be-eb43-4c3d-aa4e-58c164059384-logs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.034222 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-public-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.035959 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-scripts\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.036279 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-combined-ca-bundle\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.037628 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-fernet-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.037763 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-public-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.037783 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-scripts\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.038315 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-combined-ca-bundle\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.038850 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-config-data\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.038965 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-internal-tls-certs\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.042854 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-config-data\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.043370 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7909e8d5-a534-4178-9f85-70c7b10eae4e-credential-keys\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.067082 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5dl\" (UniqueName: \"kubernetes.io/projected/7909e8d5-a534-4178-9f85-70c7b10eae4e-kube-api-access-zg5dl\") pod \"keystone-7d456999d-27w6v\" (UID: \"7909e8d5-a534-4178-9f85-70c7b10eae4e\") " pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.069185 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdlp\" (UniqueName: \"kubernetes.io/projected/bf40e2be-eb43-4c3d-aa4e-58c164059384-kube-api-access-pbdlp\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.076937 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf40e2be-eb43-4c3d-aa4e-58c164059384-internal-tls-certs\") pod \"placement-6bb4458d9d-r4dmr\" (UID: \"bf40e2be-eb43-4c3d-aa4e-58c164059384\") " pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.109958 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.356101 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.819098 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050ed5dc-10cf-4ac6-9109-4905cf0bad39" path="/var/lib/kubelet/pods/050ed5dc-10cf-4ac6-9109-4905cf0bad39/volumes"
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.856707 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bb4458d9d-r4dmr"]
Jan 26 15:56:23 crc kubenswrapper[4713]: I0126 15:56:23.979428 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d456999d-27w6v"]
Jan 26 15:56:24 crc kubenswrapper[4713]: I0126 15:56:24.513688 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bb4458d9d-r4dmr" event={"ID":"bf40e2be-eb43-4c3d-aa4e-58c164059384","Type":"ContainerStarted","Data":"04d61d48ee39903ea6f40323901b87b262deb2b92f20b2a1ea024220532ef717"}
Jan 26 15:56:27 crc kubenswrapper[4713]: I0126 15:56:27.546868 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerStarted","Data":"a7a69157a1fa955305fbbd18bd17942a335936248b8cc2f8540cd03b77aec694"}
Jan 26 15:56:28 crc kubenswrapper[4713]: I0126 15:56:28.558740 4713 generic.go:334] "Generic (PLEG): container finished" podID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerID="a7a69157a1fa955305fbbd18bd17942a335936248b8cc2f8540cd03b77aec694" exitCode=0
Jan 26 15:56:28 crc kubenswrapper[4713]: I0126 15:56:28.558785 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerDied","Data":"a7a69157a1fa955305fbbd18bd17942a335936248b8cc2f8540cd03b77aec694"}
Jan 26 15:56:30 crc kubenswrapper[4713]: W0126 15:56:30.604489 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7909e8d5_a534_4178_9f85_70c7b10eae4e.slice/crio-5467248f0f760b43b536ca88812f62b421644a00030a22ccb2dcadb9cb3d619a WatchSource:0}: Error finding container 5467248f0f760b43b536ca88812f62b421644a00030a22ccb2dcadb9cb3d619a: Status 404 returned error can't find the container with id 5467248f0f760b43b536ca88812f62b421644a00030a22ccb2dcadb9cb3d619a
Jan 26 15:56:31 crc kubenswrapper[4713]: I0126 15:56:31.595932 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d456999d-27w6v" event={"ID":"7909e8d5-a534-4178-9f85-70c7b10eae4e","Type":"ContainerStarted","Data":"5467248f0f760b43b536ca88812f62b421644a00030a22ccb2dcadb9cb3d619a"}
Jan 26 15:56:32 crc kubenswrapper[4713]: I0126 15:56:32.610566 4713 generic.go:334] "Generic (PLEG): container finished" podID="1a3c9534-c956-4a61-a9fe-73026809a2bb" containerID="0e96aca03a12ea97b933d81698aaa79cdb2240ef0ada34d64fd5f83bc6efeae4" exitCode=0
Jan 26 15:56:32 crc kubenswrapper[4713]: I0126 15:56:32.610635 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5gr4" event={"ID":"1a3c9534-c956-4a61-a9fe-73026809a2bb","Type":"ContainerDied","Data":"0e96aca03a12ea97b933d81698aaa79cdb2240ef0ada34d64fd5f83bc6efeae4"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.301689 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.302248 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.623185 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nkt8b" event={"ID":"c8a35a5b-49a1-45aa-9090-2aab8a4893ce","Type":"ContainerStarted","Data":"ed6ab4f817e2e10891a8f9cb34536e375e46daf715bdad954c2881a2c8dc5a84"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.625275 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerStarted","Data":"47186be8e371abfd58579a55641771c689f57161c91fe3344935533e40ab24a7"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.625407 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.627741 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerStarted","Data":"50a5f91b480d40ee19ed35ffb2b70d952e0c5c385f05527b772d012043befff2"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.629342 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zhp42" event={"ID":"5c67f072-d970-466d-a3c7-20df7968e5f2","Type":"ContainerStarted","Data":"076a4eb5adb1b98c64b8bdac5b8dd9c53fa332d03927d59c78a995aaf3a4c5cc"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.631431 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d456999d-27w6v" event={"ID":"7909e8d5-a534-4178-9f85-70c7b10eae4e","Type":"ContainerStarted","Data":"09e8a60bbbc06cc834f42752bd0526e92c7af41bc41b99bfe45baf124a25e19f"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.631560 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d456999d-27w6v"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.633228 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bb4458d9d-r4dmr" event={"ID":"bf40e2be-eb43-4c3d-aa4e-58c164059384","Type":"ContainerStarted","Data":"24de9f40e17794dff1969b54078983f6b3a62f524b5ead72b645c3c32c8fcaf4"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.633263 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bb4458d9d-r4dmr" event={"ID":"bf40e2be-eb43-4c3d-aa4e-58c164059384","Type":"ContainerStarted","Data":"050f6dc8701623fe3567a548e2d9a55891caaffc0570cf09629b569012e67e31"}
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.692093 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-zhp42" podStartSLOduration=3.393290032 podStartE2EDuration="59.692069255s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="2026-01-26 15:55:36.020837377 +0000 UTC m=+1311.157854612" lastFinishedPulling="2026-01-26 15:56:32.3196166 +0000 UTC m=+1367.456633835" observedRunningTime="2026-01-26 15:56:33.668106842 +0000 UTC m=+1368.805124077" watchObservedRunningTime="2026-01-26 15:56:33.692069255 +0000 UTC m=+1368.829086490"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.707397 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nkt8b" podStartSLOduration=3.803229651 podStartE2EDuration="59.707360415s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="2026-01-26 15:55:36.298059237 +0000 UTC m=+1311.435076472" lastFinishedPulling="2026-01-26 15:56:32.202189991 +0000 UTC m=+1367.339207236" observedRunningTime="2026-01-26 15:56:33.64524766 +0000 UTC m=+1368.782264905" watchObservedRunningTime="2026-01-26 15:56:33.707360415 +0000 UTC m=+1368.844377640"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.717170 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d456999d-27w6v" podStartSLOduration=11.71714007 podStartE2EDuration="11.71714007s" podCreationTimestamp="2026-01-26 15:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:33.691778957 +0000 UTC m=+1368.828796212" watchObservedRunningTime="2026-01-26 15:56:33.71714007 +0000 UTC m=+1368.854157305"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.725580 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" podStartSLOduration=13.725561766 podStartE2EDuration="13.725561766s" podCreationTimestamp="2026-01-26 15:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:33.720851214 +0000 UTC m=+1368.857868449" watchObservedRunningTime="2026-01-26 15:56:33.725561766 +0000 UTC m=+1368.862579001"
Jan 26 15:56:33 crc kubenswrapper[4713]: I0126 15:56:33.765586 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bb4458d9d-r4dmr" podStartSLOduration=11.76556268 podStartE2EDuration="11.76556268s" podCreationTimestamp="2026-01-26 15:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:33.748499121 +0000 UTC m=+1368.885516356" watchObservedRunningTime="2026-01-26 15:56:33.76556268 +0000 UTC m=+1368.902579935"
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.082652 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5gr4"
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.164959 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data\") pod \"1a3c9534-c956-4a61-a9fe-73026809a2bb\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") "
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.165052 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqxr6\" (UniqueName: \"kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6\") pod \"1a3c9534-c956-4a61-a9fe-73026809a2bb\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") "
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.165074 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle\") pod \"1a3c9534-c956-4a61-a9fe-73026809a2bb\" (UID: \"1a3c9534-c956-4a61-a9fe-73026809a2bb\") "
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.171314 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6" (OuterVolumeSpecName: "kube-api-access-rqxr6") pod "1a3c9534-c956-4a61-a9fe-73026809a2bb" (UID: "1a3c9534-c956-4a61-a9fe-73026809a2bb"). InnerVolumeSpecName "kube-api-access-rqxr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.184606 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1a3c9534-c956-4a61-a9fe-73026809a2bb" (UID: "1a3c9534-c956-4a61-a9fe-73026809a2bb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.194534 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a3c9534-c956-4a61-a9fe-73026809a2bb" (UID: "1a3c9534-c956-4a61-a9fe-73026809a2bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.267764 4713 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.267801 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqxr6\" (UniqueName: \"kubernetes.io/projected/1a3c9534-c956-4a61-a9fe-73026809a2bb-kube-api-access-rqxr6\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.267817 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3c9534-c956-4a61-a9fe-73026809a2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.647303 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5gr4" event={"ID":"1a3c9534-c956-4a61-a9fe-73026809a2bb","Type":"ContainerDied","Data":"5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d"}
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.647408 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5519e337556537ff3127642c960547e7c0e8bf75668d4946be029a3f0294106d"
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.647546 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5gr4"
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.647955 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:34 crc kubenswrapper[4713]: I0126 15:56:34.648009 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bb4458d9d-r4dmr"
Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.431987 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f98f767bd-dxj2n"]
Jan 26 15:56:35 crc kubenswrapper[4713]: E0126 15:56:35.432849 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb" containerName="barbican-db-sync"
Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.432867 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb" containerName="barbican-db-sync"
Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.433106 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb" containerName="barbican-db-sync"
Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.434310 4713 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.452192 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.452718 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.452955 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gzg5n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.455963 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c4f76bb9-7rdcn"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.458061 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.464920 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.496786 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66c4ca0-2422-43a4-b461-f7b0cd0becea-logs\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.497077 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-combined-ca-bundle\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.497217 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data-custom\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.497490 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.497644 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wwz\" (UniqueName: \"kubernetes.io/projected/f66c4ca0-2422-43a4-b461-f7b0cd0becea-kube-api-access-h7wwz\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.511200 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c4f76bb9-7rdcn"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.547295 4713 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f98f767bd-dxj2n"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.599601 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600067 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7wwz\" (UniqueName: \"kubernetes.io/projected/f66c4ca0-2422-43a4-b461-f7b0cd0becea-kube-api-access-h7wwz\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600235 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf16ebc-72a3-4cd6-a314-0737b0252d95-logs\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600340 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-combined-ca-bundle\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600443 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66c4ca0-2422-43a4-b461-f7b0cd0becea-logs\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600521 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-combined-ca-bundle\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600599 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data-custom\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600676 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data-custom\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600757 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm6v5\" (UniqueName: \"kubernetes.io/projected/edf16ebc-72a3-4cd6-a314-0737b0252d95-kube-api-access-bm6v5\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.600835 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.601408 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66c4ca0-2422-43a4-b461-f7b0cd0becea-logs\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.618166 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-combined-ca-bundle\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.630780 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data-custom\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.640589 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66c4ca0-2422-43a4-b461-f7b0cd0becea-config-data\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.640675 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.641394 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7wwz\" (UniqueName: \"kubernetes.io/projected/f66c4ca0-2422-43a4-b461-f7b0cd0becea-kube-api-access-h7wwz\") pod \"barbican-keystone-listener-f98f767bd-dxj2n\" (UID: \"f66c4ca0-2422-43a4-b461-f7b0cd0becea\") " pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.661649 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.664187 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.674334 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.678104 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="dnsmasq-dns" containerID="cri-o://47186be8e371abfd58579a55641771c689f57161c91fe3344935533e40ab24a7" gracePeriod=10 Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.704456 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf16ebc-72a3-4cd6-a314-0737b0252d95-logs\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.704530 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-combined-ca-bundle\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.704569 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data-custom\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.704595 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm6v5\" (UniqueName: \"kubernetes.io/projected/edf16ebc-72a3-4cd6-a314-0737b0252d95-kube-api-access-bm6v5\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.704626 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.716934 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf16ebc-72a3-4cd6-a314-0737b0252d95-logs\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.717197 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.719127 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-combined-ca-bundle\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.723979 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edf16ebc-72a3-4cd6-a314-0737b0252d95-config-data-custom\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.743897 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm6v5\" (UniqueName: \"kubernetes.io/projected/edf16ebc-72a3-4cd6-a314-0737b0252d95-kube-api-access-bm6v5\") pod \"barbican-worker-6c4f76bb9-7rdcn\" (UID: \"edf16ebc-72a3-4cd6-a314-0737b0252d95\") " pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.760991 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.762983 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.774424 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.793438 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"] Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826163 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826284 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826325 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826435 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826502 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqb8j\" (UniqueName: 
\"kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.826847 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.832027 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.896865 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.940164 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.940423 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6s2\" (UniqueName: \"kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.940494 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.940686 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.940755 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941089 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941203 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941280 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941513 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941637 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqb8j\" (UniqueName: \"kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941653 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.941743 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.945476 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.952225 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.943647 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.953913 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:35 crc kubenswrapper[4713]: I0126 15:56:35.981827 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqb8j\" (UniqueName: \"kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j\") pod \"dnsmasq-dns-5ff8449c8c-tjj5v\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.046582 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6s2\" (UniqueName: \"kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.046645 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.046703 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.046734 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.046913 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.047479 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.052917 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.055020 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.061308 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.077781 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6s2\" (UniqueName: \"kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2\") pod \"barbican-api-5965d4d6c4-8lvw4\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.268052 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.286186 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.733639 4713 generic.go:334] "Generic (PLEG): container finished" podID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerID="47186be8e371abfd58579a55641771c689f57161c91fe3344935533e40ab24a7" exitCode=0 Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.733944 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerDied","Data":"47186be8e371abfd58579a55641771c689f57161c91fe3344935533e40ab24a7"} Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.733981 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" event={"ID":"23a6554a-422b-4fb1-a6c6-e99368e2b129","Type":"ContainerDied","Data":"caa0124994d657c783765e1846f9bb2e9e228ac753f2430405c3f575ce8d1a12"} Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.733996 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa0124994d657c783765e1846f9bb2e9e228ac753f2430405c3f575ce8d1a12" Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.837163 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f98f767bd-dxj2n"] Jan 26 15:56:36 crc kubenswrapper[4713]: I0126 15:56:36.958031 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.046933 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c4f76bb9-7rdcn"] Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.076979 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7254\" (UniqueName: \"kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.077054 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.077101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.077162 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.077263 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.077350 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0\") pod \"23a6554a-422b-4fb1-a6c6-e99368e2b129\" (UID: \"23a6554a-422b-4fb1-a6c6-e99368e2b129\") " Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.116919 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254" (OuterVolumeSpecName: "kube-api-access-n7254") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "kube-api-access-n7254". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.163408 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.180219 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7254\" (UniqueName: \"kubernetes.io/projected/23a6554a-422b-4fb1-a6c6-e99368e2b129-kube-api-access-n7254\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.180636 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.184204 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.188670 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.246184 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.264125 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config" (OuterVolumeSpecName: "config") pod "23a6554a-422b-4fb1-a6c6-e99368e2b129" (UID: "23a6554a-422b-4fb1-a6c6-e99368e2b129"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.283269 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.283313 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.283328 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.283342 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a6554a-422b-4fb1-a6c6-e99368e2b129-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.324757 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.455932 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"] Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.771614 4713 generic.go:334] "Generic (PLEG): container finished" podID="8dd08527-0793-4933-bcc1-780d121ece65" containerID="98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341" exitCode=0 Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.773104 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" event={"ID":"8dd08527-0793-4933-bcc1-780d121ece65","Type":"ContainerDied","Data":"98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341"} Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.773175 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" event={"ID":"8dd08527-0793-4933-bcc1-780d121ece65","Type":"ContainerStarted","Data":"fc26845ecf377d3f51607eb82b00cfe3dd636b5db478f17513de6d75e80f0016"} Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.782073 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" event={"ID":"edf16ebc-72a3-4cd6-a314-0737b0252d95","Type":"ContainerStarted","Data":"379822225a93c6b062794a636078a1f987707e012ce6d1a42a1b8b93938456de"} Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.799421 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" event={"ID":"f66c4ca0-2422-43a4-b461-f7b0cd0becea","Type":"ContainerStarted","Data":"66074f3dcb3e467142495724750d835e897819721c9844bb7ff6e71687e5b202"} Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.827972 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-glnzf" Jan 26 15:56:37 crc kubenswrapper[4713]: I0126 15:56:37.836063 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerStarted","Data":"bb7749ba0966c25105b86b8ad95955d55a4ba789db67193d8cba406d44c1a626"} Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.030435 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"] Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.047867 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-glnzf"] Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.862836 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" event={"ID":"8dd08527-0793-4933-bcc1-780d121ece65","Type":"ContainerStarted","Data":"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401"} Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.863505 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.864346 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59866b8478-b6cbm"] Jan 26 15:56:38 crc kubenswrapper[4713]: E0126 15:56:38.864945 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="init" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.864958 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="init" Jan 26 15:56:38 crc kubenswrapper[4713]: E0126 15:56:38.864982 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="dnsmasq-dns" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.864990 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="dnsmasq-dns" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.865200 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" containerName="dnsmasq-dns" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.866325 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.866675 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerStarted","Data":"b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557"} Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.871913 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.872106 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xfq6j" event={"ID":"67bee733-1013-44d9-ac74-5ce552dbb606","Type":"ContainerStarted","Data":"e7498744938e3b926090ff7b4b1fe982879ec31e8947acdc0b852a42383e08ff"} Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.872806 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.881075 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59866b8478-b6cbm"] Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.930829 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" podStartSLOduration=3.930807076 podStartE2EDuration="3.930807076s" podCreationTimestamp="2026-01-26 15:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:38.915818395 +0000 UTC m=+1374.052835630" watchObservedRunningTime="2026-01-26 15:56:38.930807076 +0000 UTC m=+1374.067824311" Jan 26 15:56:38 crc kubenswrapper[4713]: I0126 15:56:38.955198 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xfq6j" podStartSLOduration=4.686972672 podStartE2EDuration="1m25.95517927s" podCreationTimestamp="2026-01-26 15:55:13 +0000 UTC" firstStartedPulling="2026-01-26 15:55:15.460850584 +0000 UTC m=+1290.597867819" lastFinishedPulling="2026-01-26 15:56:36.729057182 +0000 UTC m=+1371.866074417" observedRunningTime="2026-01-26 15:56:38.934596792 +0000 UTC m=+1374.071614027" watchObservedRunningTime="2026-01-26 15:56:38.95517927 +0000 UTC m=+1374.092196505" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021291 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4zs\" (UniqueName: \"kubernetes.io/projected/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-kube-api-access-bd4zs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021346 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-internal-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021519 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-combined-ca-bundle\") pod \"barbican-api-59866b8478-b6cbm\" (UID: 
\"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021539 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-logs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021562 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021585 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-public-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.021625 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data-custom\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123822 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-logs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-combined-ca-bundle\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123896 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123912 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-public-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123940 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data-custom\") pod \"barbican-api-59866b8478-b6cbm\" (UID: 
\"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.123990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd4zs\" (UniqueName: \"kubernetes.io/projected/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-kube-api-access-bd4zs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.124012 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-internal-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.124307 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-logs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.130904 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-public-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.131942 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.141054 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-config-data-custom\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.141783 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-combined-ca-bundle\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.160091 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd4zs\" (UniqueName: \"kubernetes.io/projected/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-kube-api-access-bd4zs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.174220 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a611ae0d-da10-46d8-8520-0a3dd75e1d1c-internal-tls-certs\") pod \"barbican-api-59866b8478-b6cbm\" (UID: \"a611ae0d-da10-46d8-8520-0a3dd75e1d1c\") " 
pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.196390 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:39 crc kubenswrapper[4713]: I0126 15:56:39.827992 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a6554a-422b-4fb1-a6c6-e99368e2b129" path="/var/lib/kubelet/pods/23a6554a-422b-4fb1-a6c6-e99368e2b129/volumes" Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.902803 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" event={"ID":"edf16ebc-72a3-4cd6-a314-0737b0252d95","Type":"ContainerStarted","Data":"640c8996f09424f178d967c930fffe4e09eb2a4da67602cd8755307d9ad44a0c"} Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.905967 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" event={"ID":"f66c4ca0-2422-43a4-b461-f7b0cd0becea","Type":"ContainerStarted","Data":"8a47cb38b466e2430ebbb961af855032cdc20e2a21e5a4891c012d619e4830be"} Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.908957 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerStarted","Data":"019f0e226ccb91844092267cf655de8349e2d0522fffcc24fe214b808a219965"} Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.910462 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.910622 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:40 crc kubenswrapper[4713]: I0126 15:56:40.930702 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5965d4d6c4-8lvw4" podStartSLOduration=5.9306779370000005 podStartE2EDuration="5.930677937s" podCreationTimestamp="2026-01-26 15:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:40.927258401 +0000 UTC m=+1376.064275646" watchObservedRunningTime="2026-01-26 15:56:40.930677937 +0000 UTC m=+1376.067695172" Jan 26 15:56:41 crc kubenswrapper[4713]: I0126 15:56:41.028164 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59866b8478-b6cbm"] Jan 26 15:56:42 crc kubenswrapper[4713]: I0126 15:56:42.880895 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:42 crc kubenswrapper[4713]: I0126 15:56:42.897210 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.139078 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.139616 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-649645b98f-x7rkr" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-api" containerID="cri-o://e15aac916f031b21242a43b9fcf8a0a5d520031395e5d5d46d3c72531d566d70" gracePeriod=30 Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.141015 4713 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-649645b98f-x7rkr" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" containerID="cri-o://458739c55da0b0808d87285bb5f34dcd76ee12612ddf0d6d3277564b0bba017b" gracePeriod=30 Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.196989 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-587f599955-5k56n"] Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.199036 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.256898 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-587f599955-5k56n"] Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.324070 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-httpd-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.324169 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.328241 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-ovndb-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.341693 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-public-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.341828 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-internal-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.341915 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx52k\" (UniqueName: \"kubernetes.io/projected/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-kube-api-access-qx52k\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.341962 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-combined-ca-bundle\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: 
I0126 15:56:43.446805 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.446871 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-ovndb-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.446986 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-public-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.447023 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-internal-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.447052 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx52k\" (UniqueName: \"kubernetes.io/projected/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-kube-api-access-qx52k\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.447078 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-combined-ca-bundle\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.447131 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-httpd-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.458871 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-httpd-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.472240 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-public-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.473330 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-combined-ca-bundle\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.478277 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-config\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.479460 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-ovndb-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.481863 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-internal-tls-certs\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.495440 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx52k\" (UniqueName: \"kubernetes.io/projected/df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a-kube-api-access-qx52k\") pod \"neutron-587f599955-5k56n\" (UID: \"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a\") " pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.524089 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.552159 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-649645b98f-x7rkr" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": read tcp 10.217.0.2:49284->10.217.0.166:9696: read: connection reset by peer" Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.975138 4713 generic.go:334] "Generic (PLEG): container finished" podID="5c67f072-d970-466d-a3c7-20df7968e5f2" containerID="076a4eb5adb1b98c64b8bdac5b8dd9c53fa332d03927d59c78a995aaf3a4c5cc" exitCode=0 Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.975222 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zhp42" event={"ID":"5c67f072-d970-466d-a3c7-20df7968e5f2","Type":"ContainerDied","Data":"076a4eb5adb1b98c64b8bdac5b8dd9c53fa332d03927d59c78a995aaf3a4c5cc"} Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.977929 4713 generic.go:334] "Generic (PLEG): container finished" podID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerID="458739c55da0b0808d87285bb5f34dcd76ee12612ddf0d6d3277564b0bba017b" exitCode=0 Jan 26 15:56:43 crc kubenswrapper[4713]: I0126 15:56:43.977981 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerDied","Data":"458739c55da0b0808d87285bb5f34dcd76ee12612ddf0d6d3277564b0bba017b"} Jan 26 15:56:44 crc kubenswrapper[4713]: W0126 15:56:44.732063 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda611ae0d_da10_46d8_8520_0a3dd75e1d1c.slice/crio-acb63a04f2449f6c06da4d250a34c7dc59a5566ca7b740da37521c0e7b7ddad7 WatchSource:0}: Error finding container acb63a04f2449f6c06da4d250a34c7dc59a5566ca7b740da37521c0e7b7ddad7: Status 404 returned error can't find the container with id acb63a04f2449f6c06da4d250a34c7dc59a5566ca7b740da37521c0e7b7ddad7 Jan 26 15:56:44 crc kubenswrapper[4713]: I0126 15:56:44.887118 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-649645b98f-x7rkr" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused" Jan 26 15:56:44 crc kubenswrapper[4713]: I0126 15:56:44.990108 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" event={"ID":"f66c4ca0-2422-43a4-b461-f7b0cd0becea","Type":"ContainerStarted","Data":"19c74e573c812d6659dec457d5c15f97369658b736dc735e94a14381015ac679"} Jan 26 15:56:44 crc kubenswrapper[4713]: I0126 15:56:44.991397 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59866b8478-b6cbm" event={"ID":"a611ae0d-da10-46d8-8520-0a3dd75e1d1c","Type":"ContainerStarted","Data":"acb63a04f2449f6c06da4d250a34c7dc59a5566ca7b740da37521c0e7b7ddad7"} Jan 26 15:56:44 crc kubenswrapper[4713]: I0126 15:56:44.993661 4713 generic.go:334] "Generic (PLEG): container finished" podID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" containerID="ed6ab4f817e2e10891a8f9cb34536e375e46daf715bdad954c2881a2c8dc5a84" exitCode=0 Jan 26 15:56:44 crc kubenswrapper[4713]: I0126 15:56:44.993707 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nkt8b" 
event={"ID":"c8a35a5b-49a1-45aa-9090-2aab8a4893ce","Type":"ContainerDied","Data":"ed6ab4f817e2e10891a8f9cb34536e375e46daf715bdad954c2881a2c8dc5a84"} Jan 26 15:56:45 crc kubenswrapper[4713]: I0126 15:56:45.008821 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f98f767bd-dxj2n" podStartSLOduration=6.392509691 podStartE2EDuration="10.008799192s" podCreationTimestamp="2026-01-26 15:56:35 +0000 UTC" firstStartedPulling="2026-01-26 15:56:36.857648555 +0000 UTC m=+1371.994665790" lastFinishedPulling="2026-01-26 15:56:40.473938056 +0000 UTC m=+1375.610955291" observedRunningTime="2026-01-26 15:56:45.00835931 +0000 UTC m=+1380.145376545" watchObservedRunningTime="2026-01-26 15:56:45.008799192 +0000 UTC m=+1380.145816427" Jan 26 15:56:46 crc kubenswrapper[4713]: I0126 15:56:46.278591 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:56:46 crc kubenswrapper[4713]: I0126 15:56:46.481895 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 15:56:46 crc kubenswrapper[4713]: I0126 15:56:46.482130 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="dnsmasq-dns" containerID="cri-o://bfa389dee1f67f72bd6e523abe4fab279ac5b08bf16bb151b68a049fe02a52b3" gracePeriod=10 Jan 26 15:56:47 crc kubenswrapper[4713]: I0126 15:56:47.017938 4713 generic.go:334] "Generic (PLEG): container finished" podID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerID="bfa389dee1f67f72bd6e523abe4fab279ac5b08bf16bb151b68a049fe02a52b3" exitCode=0 Jan 26 15:56:47 crc kubenswrapper[4713]: I0126 15:56:47.018052 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerDied","Data":"bfa389dee1f67f72bd6e523abe4fab279ac5b08bf16bb151b68a049fe02a52b3"} Jan 26 15:56:47 crc kubenswrapper[4713]: I0126 15:56:47.979645 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.009954 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.026804 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts\") pod \"5c67f072-d970-466d-a3c7-20df7968e5f2\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027095 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027164 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027257 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs\") pod \"5c67f072-d970-466d-a3c7-20df7968e5f2\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027330 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle\") pod \"5c67f072-d970-466d-a3c7-20df7968e5f2\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027506 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027646 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027689 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc2hn\" (UniqueName: \"kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027807 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data\") pod \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\" (UID: \"c8a35a5b-49a1-45aa-9090-2aab8a4893ce\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027902 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data\") pod \"5c67f072-d970-466d-a3c7-20df7968e5f2\" (UID: 
\"5c67f072-d970-466d-a3c7-20df7968e5f2\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.027994 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4m7t\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t\") pod \"5c67f072-d970-466d-a3c7-20df7968e5f2\" (UID: \"5c67f072-d970-466d-a3c7-20df7968e5f2\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.044491 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.103069 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts" (OuterVolumeSpecName: "scripts") pod "5c67f072-d970-466d-a3c7-20df7968e5f2" (UID: "5c67f072-d970-466d-a3c7-20df7968e5f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.120983 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c67f072-d970-466d-a3c7-20df7968e5f2" (UID: "5c67f072-d970-466d-a3c7-20df7968e5f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138394 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138427 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138438 4713 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138918 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t" (OuterVolumeSpecName: "kube-api-access-t4m7t") pod "5c67f072-d970-466d-a3c7-20df7968e5f2" (UID: "5c67f072-d970-466d-a3c7-20df7968e5f2"). InnerVolumeSpecName "kube-api-access-t4m7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138959 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts" (OuterVolumeSpecName: "scripts") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.138985 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn" (OuterVolumeSpecName: "kube-api-access-lc2hn") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "kube-api-access-lc2hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.139791 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.158142 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nkt8b" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.158137 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nkt8b" event={"ID":"c8a35a5b-49a1-45aa-9090-2aab8a4893ce","Type":"ContainerDied","Data":"63f5d65f2588e8b9fd24482c1ddd8ed4644ad78f455fe6f8e9a989838ea29049"} Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.158424 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f5d65f2588e8b9fd24482c1ddd8ed4644ad78f455fe6f8e9a989838ea29049" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.167125 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.168505 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs" (OuterVolumeSpecName: "certs") pod "5c67f072-d970-466d-a3c7-20df7968e5f2" (UID: "5c67f072-d970-466d-a3c7-20df7968e5f2"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.170997 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zhp42" event={"ID":"5c67f072-d970-466d-a3c7-20df7968e5f2","Type":"ContainerDied","Data":"3e846b3330a0a6cdc527f167f350729e77b91a2b3ca9f9deef8c83cc770550c0"} Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.171039 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e846b3330a0a6cdc527f167f350729e77b91a2b3ca9f9deef8c83cc770550c0" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.171146 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-zhp42" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.173844 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data" (OuterVolumeSpecName: "config-data") pod "5c67f072-d970-466d-a3c7-20df7968e5f2" (UID: "5c67f072-d970-466d-a3c7-20df7968e5f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.178874 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" event={"ID":"edf16ebc-72a3-4cd6-a314-0737b0252d95","Type":"ContainerStarted","Data":"b5c1cd10d23630983bdd7e6e17adee9be4dee5a0056c83d6bd0879493b81d83e"} Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244471 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244520 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc2hn\" (UniqueName: \"kubernetes.io/projected/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-kube-api-access-lc2hn\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244554 4713 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244569 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c67f072-d970-466d-a3c7-20df7968e5f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244580 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4m7t\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-kube-api-access-t4m7t\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244592 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.244624 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5c67f072-d970-466d-a3c7-20df7968e5f2-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.264510 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data" (OuterVolumeSpecName: "config-data") pod "c8a35a5b-49a1-45aa-9090-2aab8a4893ce" (UID: "c8a35a5b-49a1-45aa-9090-2aab8a4893ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.347822 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a35a5b-49a1-45aa-9090-2aab8a4893ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.402586 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c4f76bb9-7rdcn" podStartSLOduration=9.983555084 podStartE2EDuration="13.402561192s" podCreationTimestamp="2026-01-26 15:56:35 +0000 UTC" firstStartedPulling="2026-01-26 15:56:37.055131013 +0000 UTC m=+1372.192148248" lastFinishedPulling="2026-01-26 15:56:40.474137121 +0000 UTC m=+1375.611154356" observedRunningTime="2026-01-26 15:56:48.21063636 +0000 UTC m=+1383.347653605" watchObservedRunningTime="2026-01-26 15:56:48.402561192 +0000 UTC m=+1383.539578437" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.405704 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-587f599955-5k56n"] Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.466457 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.502387 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.559302 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config\") pod \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.561957 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2w49\" (UniqueName: \"kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49\") pod \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.562008 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb\") pod \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.562061 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc\") pod \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.562128 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb\") pod \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\" (UID: \"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb\") " Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.576205 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49" (OuterVolumeSpecName: "kube-api-access-n2w49") pod "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" (UID: 
"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb"). InnerVolumeSpecName "kube-api-access-n2w49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.666121 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2w49\" (UniqueName: \"kubernetes.io/projected/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-kube-api-access-n2w49\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.750711 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" (UID: "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.769026 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.770894 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config" (OuterVolumeSpecName: "config") pod "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" (UID: "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.784590 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" (UID: "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.805900 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" (UID: "580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.863480 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.877853 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.877883 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:48 crc kubenswrapper[4713]: I0126 15:56:48.877909 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.127177 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-lm4cs"] Jan 26 15:56:49 crc kubenswrapper[4713]: E0126 15:56:49.127855 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" containerName="cinder-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.127868 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" containerName="cinder-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: E0126 15:56:49.127881 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="dnsmasq-dns" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.127887 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="dnsmasq-dns" Jan 26 15:56:49 crc kubenswrapper[4713]: E0126 15:56:49.127902 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="init" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.127908 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="init" Jan 26 15:56:49 crc kubenswrapper[4713]: E0126 15:56:49.127931 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" containerName="cloudkitty-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.127937 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" containerName="cloudkitty-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.128126 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" containerName="cloudkitty-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.128139 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" containerName="cinder-db-sync" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.128154 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" containerName="dnsmasq-dns" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.130635 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.135067 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.135096 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.135335 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.135448 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.135543 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kbfj7" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.143642 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-lm4cs"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.187910 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.188050 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx79x\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.188246 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.188298 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.188432 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.225002 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59866b8478-b6cbm" event={"ID":"a611ae0d-da10-46d8-8520-0a3dd75e1d1c","Type":"ContainerStarted","Data":"376b708baf4257fef67a0e377dbd36e65da193d9b5827d055654f2c25d5ac53b"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.225054 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-59866b8478-b6cbm" event={"ID":"a611ae0d-da10-46d8-8520-0a3dd75e1d1c","Type":"ContainerStarted","Data":"cecb34c310d0e01d65a548802bfcf072584c6a3795cdaf01678e81d0776c76e0"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.225175 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.241725 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" event={"ID":"580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb","Type":"ContainerDied","Data":"8e25e75b6d4a329906bbd1684c543d5739de8c362f413b188d752192dda63208"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.241795 4713 scope.go:117] "RemoveContainer" containerID="bfa389dee1f67f72bd6e523abe4fab279ac5b08bf16bb151b68a049fe02a52b3" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.241972 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-zwlt9" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.261117 4713 generic.go:334] "Generic (PLEG): container finished" podID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerID="e15aac916f031b21242a43b9fcf8a0a5d520031395e5d5d46d3c72531d566d70" exitCode=0 Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.261187 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerDied","Data":"e15aac916f031b21242a43b9fcf8a0a5d520031395e5d5d46d3c72531d566d70"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.268583 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59866b8478-b6cbm" podStartSLOduration=11.268556079 podStartE2EDuration="11.268556079s" podCreationTimestamp="2026-01-26 15:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:49.254181045 +0000 UTC m=+1384.391198280" watchObservedRunningTime="2026-01-26 15:56:49.268556079 +0000 UTC m=+1384.405573314" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.277133 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587f599955-5k56n" event={"ID":"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a","Type":"ContainerStarted","Data":"6b914c3771459197a5ea4c9f1f6431c7b796319cad234f27a707928ec0923030"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.277189 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587f599955-5k56n" event={"ID":"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a","Type":"ContainerStarted","Data":"c468b49ffe35c9d8c0a3553dc77c6070edbf407df15c53381023c06035273973"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.277204 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587f599955-5k56n" event={"ID":"df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a","Type":"ContainerStarted","Data":"0d2b4754266e7a5377d0120454fca10150569c63d24e6c68d7d74e302cc4d9a1"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.278259 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-587f599955-5k56n" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.291517 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx79x\" (UniqueName: 
\"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.291711 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.291760 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.291853 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.291950 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.293341 4713 scope.go:117] "RemoveContainer" containerID="b954c4161600132224a2cb89107634452a13db90e3ee3f81c8110a825c8bbe16" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.311755 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.311822 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-central-agent" containerID="cri-o://248ada6bfa1f1c144e649100bc121acb023ce451181c1d6af73c19256c7014ca" gracePeriod=30 Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.311906 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerStarted","Data":"3ef1c0f725c7a4cf506ba15fc10fe7fc08a84a24ee12294f06386a0c043b7401"} Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.311928 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="proxy-httpd" containerID="cri-o://3ef1c0f725c7a4cf506ba15fc10fe7fc08a84a24ee12294f06386a0c043b7401" gracePeriod=30 Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.311968 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="sg-core" 
containerID="cri-o://50a5f91b480d40ee19ed35ffb2b70d952e0c5c385f05527b772d012043befff2" gracePeriod=30 Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.312003 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-notification-agent" containerID="cri-o://4b2bb803f267d70ea0cbe153d58ec8178d95b9c82a8c3ed96d903b08161de957" gracePeriod=30 Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.312270 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.316007 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.320657 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.332065 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.353718 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.357191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx79x\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x\") pod \"cloudkitty-storageinit-lm4cs\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.398131 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-zwlt9"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.437449 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.4238017210000002 podStartE2EDuration="1m15.437426113s" podCreationTimestamp="2026-01-26 15:55:34 +0000 UTC" firstStartedPulling="2026-01-26 15:55:36.119936243 +0000 UTC m=+1311.256953478" lastFinishedPulling="2026-01-26 15:56:48.133560635 +0000 UTC m=+1383.270577870" observedRunningTime="2026-01-26 15:56:49.436784565 +0000 UTC m=+1384.573801800" watchObservedRunningTime="2026-01-26 15:56:49.437426113 +0000 UTC m=+1384.574443348" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.466610 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.497279 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.499615 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.513497 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7nxks" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.513928 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.514155 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.516177 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-587f599955-5k56n" podStartSLOduration=6.516161575 podStartE2EDuration="6.516161575s" podCreationTimestamp="2026-01-26 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:49.49393818 +0000 UTC m=+1384.630955415" watchObservedRunningTime="2026-01-26 15:56:49.516161575 +0000 UTC m=+1384.653178810" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.549279 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.588488 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634066 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634652 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5f8\" (UniqueName: \"kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634702 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634813 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634836 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.634860 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.694818 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.696709 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736613 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736669 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736734 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736763 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736789 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736806 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736851 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736919 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736938 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5f8\" (UniqueName: \"kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.736957 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.737019 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.737064 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.738679 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.740503 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"] Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.751410 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.758226 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.763610 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.774072 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.904250 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.904905 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.905109 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.905255 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.905805 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.905920 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.909069 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.909696 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.909999 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.910260 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.910704 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.921441 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5f8\" (UniqueName: \"kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8\") pod \"cinder-scheduler-0\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") " pod="openstack/cinder-scheduler-0" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.965202 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs\") pod \"dnsmasq-dns-7c68459c4c-j9whv\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:49 crc kubenswrapper[4713]: I0126 15:56:49.971740 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb" path="/var/lib/kubelet/pods/580f609d-fa6c-4fc4-aa29-4af8c6f9cfcb/volumes" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.066082 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.067958 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.079877 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.087862 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.105001 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.169019 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.262837 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.263627 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.263738 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.263799 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h57p\" (UniqueName: \"kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.263881 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.263973 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.264844 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.367852 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.367973 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368067 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368120 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368149 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h57p\" (UniqueName: \"kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368183 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368219 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.368318 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.370745 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.387262 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.399021 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.404956 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.412227 4713 generic.go:334] "Generic (PLEG): container finished" podID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" 
containerID="50a5f91b480d40ee19ed35ffb2b70d952e0c5c385f05527b772d012043befff2" exitCode=2 Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.412342 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerDied","Data":"50a5f91b480d40ee19ed35ffb2b70d952e0c5c385f05527b772d012043befff2"} Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.413210 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.415041 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h57p\" (UniqueName: \"kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p\") pod \"cinder-api-0\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.427753 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.445986 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590001 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590384 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590475 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590529 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590612 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590664 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " 
Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.590776 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwqkf\" (UniqueName: \"kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf\") pod \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\" (UID: \"ae985b0b-b47a-4084-904c-e7b10ad3ad76\") " Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.603245 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf" (OuterVolumeSpecName: "kube-api-access-hwqkf") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "kube-api-access-hwqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.604890 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.678488 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-lm4cs"] Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.694717 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwqkf\" (UniqueName: \"kubernetes.io/projected/ae985b0b-b47a-4084-904c-e7b10ad3ad76-kube-api-access-hwqkf\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.694756 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.696534 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.731562 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.743891 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.754947 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.786784 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.798883 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.798922 4713 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.798937 4713 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.798948 4713 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.815891 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config" (OuterVolumeSpecName: "config") pod "ae985b0b-b47a-4084-904c-e7b10ad3ad76" (UID: "ae985b0b-b47a-4084-904c-e7b10ad3ad76"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.959618 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae985b0b-b47a-4084-904c-e7b10ad3ad76-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:50 crc kubenswrapper[4713]: I0126 15:56:50.984715 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.004014 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"] Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.363047 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:51 crc kubenswrapper[4713]: W0126 15:56:51.367544 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86023328_687b_4b40_ab36_68044003a0ac.slice/crio-f4f9336c246f42f920ec31db7b23a2aa87045e80494bf03fe1854567c9c32baa WatchSource:0}: Error finding container f4f9336c246f42f920ec31db7b23a2aa87045e80494bf03fe1854567c9c32baa: Status 404 returned error can't find the container with id f4f9336c246f42f920ec31db7b23a2aa87045e80494bf03fe1854567c9c32baa Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.474867 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-lm4cs" event={"ID":"889cf7db-25b0-4afa-8daa-351dbd2dffe8","Type":"ContainerStarted","Data":"15260fdb61fe922d7a3e4ac66956ad3d99063b560c58ab4fde03cd84fa57e7f3"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.475124 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-lm4cs" event={"ID":"889cf7db-25b0-4afa-8daa-351dbd2dffe8","Type":"ContainerStarted","Data":"4b73e3ba37a279b289c6f71a7f67f5c56dac7523b44c317af35fe615c8fc4038"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.484641 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerStarted","Data":"f4f9336c246f42f920ec31db7b23a2aa87045e80494bf03fe1854567c9c32baa"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.509851 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-lm4cs" podStartSLOduration=2.509832502 podStartE2EDuration="2.509832502s" podCreationTimestamp="2026-01-26 15:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:51.497861605 +0000 UTC m=+1386.634878850" watchObservedRunningTime="2026-01-26 15:56:51.509832502 +0000 UTC m=+1386.646849737" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535401 4713 generic.go:334] "Generic (PLEG): container finished" podID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerID="3ef1c0f725c7a4cf506ba15fc10fe7fc08a84a24ee12294f06386a0c043b7401" exitCode=0 Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535440 4713 generic.go:334] "Generic (PLEG): container finished" podID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerID="4b2bb803f267d70ea0cbe153d58ec8178d95b9c82a8c3ed96d903b08161de957" exitCode=0 Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535449 4713 generic.go:334] "Generic (PLEG): container finished" podID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" 
containerID="248ada6bfa1f1c144e649100bc121acb023ce451181c1d6af73c19256c7014ca" exitCode=0 Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535516 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerDied","Data":"3ef1c0f725c7a4cf506ba15fc10fe7fc08a84a24ee12294f06386a0c043b7401"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535541 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerDied","Data":"4b2bb803f267d70ea0cbe153d58ec8178d95b9c82a8c3ed96d903b08161de957"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.535551 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerDied","Data":"248ada6bfa1f1c144e649100bc121acb023ce451181c1d6af73c19256c7014ca"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.546717 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerStarted","Data":"f288a5673d4a4a3c2445160ec073ffc7736d3ba62afce0e4116ce9f17c7bd66b"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.546767 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerStarted","Data":"57393f8b3da039d0b4525d23bb3f0146357114e5ce1561a130f0e0be1b5f1162"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.592718 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-649645b98f-x7rkr" event={"ID":"ae985b0b-b47a-4084-904c-e7b10ad3ad76","Type":"ContainerDied","Data":"75a537f936136c3e39a41f73d0c670729f8f142e30a72bc78a65c476b888a922"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.592984 4713 scope.go:117] "RemoveContainer" containerID="458739c55da0b0808d87285bb5f34dcd76ee12612ddf0d6d3277564b0bba017b" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.593012 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-649645b98f-x7rkr" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.601165 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerStarted","Data":"6862901dc413c6499f1f8abad46596cbc005df44df7f790bc9fe572e0b4dad5b"} Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.658904 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.708287 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-649645b98f-x7rkr"] Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.739490 4713 scope.go:117] "RemoveContainer" containerID="e15aac916f031b21242a43b9fcf8a0a5d520031395e5d5d46d3c72531d566d70" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.751656 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810197 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810269 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwwp\" (UniqueName: \"kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810320 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810431 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810594 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810652 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.810679 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml\") pod \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\" (UID: \"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d\") " Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.812825 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.817745 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.820058 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp" (OuterVolumeSpecName: "kube-api-access-jhwwp") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "kube-api-access-jhwwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.831807 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts" (OuterVolumeSpecName: "scripts") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.921306 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" path="/var/lib/kubelet/pods/ae985b0b-b47a-4084-904c-e7b10ad3ad76/volumes" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.926334 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.926403 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwwp\" (UniqueName: \"kubernetes.io/projected/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-kube-api-access-jhwwp\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.926422 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.926436 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:51 crc kubenswrapper[4713]: I0126 15:56:51.960598 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.030870 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.031546 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.055215 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data" (OuterVolumeSpecName: "config-data") pod "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" (UID: "f7d8d9b2-3166-4607-abfe-cc612f9c9e4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.105792 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.132866 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.132926 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.645937 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7d8d9b2-3166-4607-abfe-cc612f9c9e4d","Type":"ContainerDied","Data":"ee7669c995bb95ec938c2c096582d412cb6ad1393296eeb513e6a242776385c6"} Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.647352 4713 scope.go:117] "RemoveContainer" containerID="3ef1c0f725c7a4cf506ba15fc10fe7fc08a84a24ee12294f06386a0c043b7401" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.646648 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.654034 4713 generic.go:334] "Generic (PLEG): container finished" podID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerID="f288a5673d4a4a3c2445160ec073ffc7736d3ba62afce0e4116ce9f17c7bd66b" exitCode=0 Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.654711 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerDied","Data":"f288a5673d4a4a3c2445160ec073ffc7736d3ba62afce0e4116ce9f17c7bd66b"} Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.675581 4713 scope.go:117] "RemoveContainer" containerID="50a5f91b480d40ee19ed35ffb2b70d952e0c5c385f05527b772d012043befff2" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.745517 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.747878 4713 scope.go:117] "RemoveContainer" containerID="4b2bb803f267d70ea0cbe153d58ec8178d95b9c82a8c3ed96d903b08161de957" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.763039 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.783282 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.783867 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-api" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.783892 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" 
containerName="neutron-api" Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.783909 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-notification-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.783917 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-notification-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.783938 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="sg-core" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.783947 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="sg-core" Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.783968 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-central-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.783976 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-central-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.784009 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="proxy-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784017 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="proxy-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: E0126 15:56:52.784027 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784035 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784283 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="proxy-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784310 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-central-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784337 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="sg-core" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784349 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" containerName="ceilometer-notification-agent" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784430 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-api" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.784445 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae985b0b-b47a-4084-904c-e7b10ad3ad76" containerName="neutron-httpd" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.787285 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.792067 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.796793 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.796952 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.818241 4713 scope.go:117] "RemoveContainer" containerID="248ada6bfa1f1c144e649100bc121acb023ce451181c1d6af73c19256c7014ca" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854600 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854667 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854685 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854704 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854742 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854765 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.854795 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxwx\" (UniqueName: \"kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957346 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957500 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957548 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957602 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957657 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.957722 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdxwx\" (UniqueName: \"kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.958935 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.959881 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.968897 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.977816 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.980563 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.984464 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdxwx\" (UniqueName: \"kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:52 crc kubenswrapper[4713]: I0126 15:56:52.997396 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts\") pod \"ceilometer-0\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " pod="openstack/ceilometer-0" Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.134023 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.671132 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerStarted","Data":"27b1e01150ce0ee638b504ea460a47f1526a7d749dcf2feab72ec813bb5d37f1"} Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.671547 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.673267 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerStarted","Data":"1c5186870d30250c65e100b679e0f494333c92e71175c3f256ed15fc0495874a"} Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.679081 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerStarted","Data":"7704a26e8e11ca58a98a05eea5d6a6c882b43ab57dcbed69567763167a9d7ba3"} Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.708904 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" podStartSLOduration=4.708882039 podStartE2EDuration="4.708882039s" podCreationTimestamp="2026-01-26 15:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:53.691026498 +0000 UTC m=+1388.828043753" watchObservedRunningTime="2026-01-26 15:56:53.708882039 +0000 UTC m=+1388.845899274" Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.730658 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:56:53 crc kubenswrapper[4713]: W0126 15:56:53.762468 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c675f5b_7900_4ba5_baf3_7ff64bf3a2c6.slice/crio-4cd3c2db2abb77d0f51f17fe552f96b08089e0847ba4924979b7d81158f75597 WatchSource:0}: Error finding 
container 4cd3c2db2abb77d0f51f17fe552f96b08089e0847ba4924979b7d81158f75597: Status 404 returned error can't find the container with id 4cd3c2db2abb77d0f51f17fe552f96b08089e0847ba4924979b7d81158f75597 Jan 26 15:56:53 crc kubenswrapper[4713]: I0126 15:56:53.824929 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d8d9b2-3166-4607-abfe-cc612f9c9e4d" path="/var/lib/kubelet/pods/f7d8d9b2-3166-4607-abfe-cc612f9c9e4d/volumes" Jan 26 15:56:54 crc kubenswrapper[4713]: I0126 15:56:54.697455 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerStarted","Data":"4cd3c2db2abb77d0f51f17fe552f96b08089e0847ba4924979b7d81158f75597"} Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.711788 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerStarted","Data":"5e883bb238f72009a179e25f3cf3f01df40f37532bf429489a7814d1be58f054"} Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.712435 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerStarted","Data":"f453747697c311ff4228199277fa77815194ea55007f5ee432753d6e678e17e5"} Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.714659 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerStarted","Data":"3f081b8a503dd3110100dec2e71367de73f05fe4b7b6884bc22205f384c66496"} Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.717426 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerStarted","Data":"df2e40e20c96a6aa0cdaa14f07962cd67757fd523bd43ef617f17e602da3712f"} Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.717685 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api-log" containerID="cri-o://7704a26e8e11ca58a98a05eea5d6a6c882b43ab57dcbed69567763167a9d7ba3" gracePeriod=30 Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.718617 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api" containerID="cri-o://df2e40e20c96a6aa0cdaa14f07962cd67757fd523bd43ef617f17e602da3712f" gracePeriod=30 Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.718646 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.741789 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.41515346 podStartE2EDuration="6.741759568s" podCreationTimestamp="2026-01-26 15:56:49 +0000 UTC" firstStartedPulling="2026-01-26 15:56:51.053949095 +0000 UTC m=+1386.190966330" lastFinishedPulling="2026-01-26 15:56:52.380555203 +0000 UTC m=+1387.517572438" observedRunningTime="2026-01-26 15:56:55.732621611 +0000 UTC m=+1390.869638846" watchObservedRunningTime="2026-01-26 15:56:55.741759568 +0000 UTC m=+1390.878776803" Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.789435 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-api-0" podStartSLOduration=5.789406356 podStartE2EDuration="5.789406356s" podCreationTimestamp="2026-01-26 15:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:55.773917681 +0000 UTC m=+1390.910934916" watchObservedRunningTime="2026-01-26 15:56:55.789406356 +0000 UTC m=+1390.926423591" Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.939669 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bb4458d9d-r4dmr" Jan 26 15:56:55 crc kubenswrapper[4713]: I0126 15:56:55.965981 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bb4458d9d-r4dmr" Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.773798 4713 generic.go:334] "Generic (PLEG): container finished" podID="86023328-687b-4b40-ab36-68044003a0ac" containerID="df2e40e20c96a6aa0cdaa14f07962cd67757fd523bd43ef617f17e602da3712f" exitCode=0 Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.774250 4713 generic.go:334] "Generic (PLEG): container finished" podID="86023328-687b-4b40-ab36-68044003a0ac" containerID="7704a26e8e11ca58a98a05eea5d6a6c882b43ab57dcbed69567763167a9d7ba3" exitCode=143 Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.774321 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerDied","Data":"df2e40e20c96a6aa0cdaa14f07962cd67757fd523bd43ef617f17e602da3712f"} Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.774348 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerDied","Data":"7704a26e8e11ca58a98a05eea5d6a6c882b43ab57dcbed69567763167a9d7ba3"} Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.803915 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerStarted","Data":"e5351d0edef600e45544d3e2e26fec9d54e7b7f4ddfc49e3aed45ff766d10a7f"} Jan 26 15:56:56 crc kubenswrapper[4713]: I0126 15:56:56.955001 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.029621 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105194 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105315 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105404 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105445 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105471 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h57p\" (UniqueName: \"kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105488 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.105527 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom\") pod \"86023328-687b-4b40-ab36-68044003a0ac\" (UID: \"86023328-687b-4b40-ab36-68044003a0ac\") " Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.106326 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.108017 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs" (OuterVolumeSpecName: "logs") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.114818 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.114966 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts" (OuterVolumeSpecName: "scripts") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.115593 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p" (OuterVolumeSpecName: "kube-api-access-9h57p") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "kube-api-access-9h57p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.143920 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d456999d-27w6v" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.181531 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.201815 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data" (OuterVolumeSpecName: "config-data") pod "86023328-687b-4b40-ab36-68044003a0ac" (UID: "86023328-687b-4b40-ab36-68044003a0ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209702 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209735 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209746 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209757 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86023328-687b-4b40-ab36-68044003a0ac-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209766 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h57p\" (UniqueName: \"kubernetes.io/projected/86023328-687b-4b40-ab36-68044003a0ac-kube-api-access-9h57p\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209775 4713 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86023328-687b-4b40-ab36-68044003a0ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.209783 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86023328-687b-4b40-ab36-68044003a0ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.818327 4713 generic.go:334] "Generic (PLEG): container finished" podID="889cf7db-25b0-4afa-8daa-351dbd2dffe8" containerID="15260fdb61fe922d7a3e4ac66956ad3d99063b560c58ab4fde03cd84fa57e7f3" exitCode=0 Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.818614 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-lm4cs" event={"ID":"889cf7db-25b0-4afa-8daa-351dbd2dffe8","Type":"ContainerDied","Data":"15260fdb61fe922d7a3e4ac66956ad3d99063b560c58ab4fde03cd84fa57e7f3"} Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.823872 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86023328-687b-4b40-ab36-68044003a0ac","Type":"ContainerDied","Data":"f4f9336c246f42f920ec31db7b23a2aa87045e80494bf03fe1854567c9c32baa"} Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.823936 4713 scope.go:117] "RemoveContainer" containerID="df2e40e20c96a6aa0cdaa14f07962cd67757fd523bd43ef617f17e602da3712f" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.824087 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.828604 4713 generic.go:334] "Generic (PLEG): container finished" podID="67bee733-1013-44d9-ac74-5ce552dbb606" containerID="e7498744938e3b926090ff7b4b1fe982879ec31e8947acdc0b852a42383e08ff" exitCode=0 Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.828658 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xfq6j" event={"ID":"67bee733-1013-44d9-ac74-5ce552dbb606","Type":"ContainerDied","Data":"e7498744938e3b926090ff7b4b1fe982879ec31e8947acdc0b852a42383e08ff"} Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.959741 4713 scope.go:117] "RemoveContainer" containerID="7704a26e8e11ca58a98a05eea5d6a6c882b43ab57dcbed69567763167a9d7ba3" Jan 26 15:56:57 crc kubenswrapper[4713]: I0126 15:56:57.997944 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.028649 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.066414 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:58 crc kubenswrapper[4713]: E0126 15:56:58.066894 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.066916 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api" Jan 26 15:56:58 crc kubenswrapper[4713]: E0126 15:56:58.066931 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api-log" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.066937 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api-log" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.067108 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.067138 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="86023328-687b-4b40-ab36-68044003a0ac" containerName="cinder-api-log" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.068237 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.072887 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.073129 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.119586 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.135515 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.135735 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.135790 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.135883 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a992c5f-9e04-4776-8603-5c9b4def66c7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.136130 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmjn\" (UniqueName: \"kubernetes.io/projected/3a992c5f-9e04-4776-8603-5c9b4def66c7-kube-api-access-csmjn\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.141714 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.141872 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.141940 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-scripts\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " 
pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.141998 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a992c5f-9e04-4776-8603-5c9b4def66c7-logs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.192597 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59866b8478-b6cbm" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.200462 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.244868 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.244945 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.244974 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245048 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a992c5f-9e04-4776-8603-5c9b4def66c7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245133 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmjn\" (UniqueName: \"kubernetes.io/projected/3a992c5f-9e04-4776-8603-5c9b4def66c7-kube-api-access-csmjn\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245161 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245197 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245227 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-scripts\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " 
pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.245276 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a992c5f-9e04-4776-8603-5c9b4def66c7-logs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.247493 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a992c5f-9e04-4776-8603-5c9b4def66c7-logs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.252037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a992c5f-9e04-4776-8603-5c9b4def66c7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.253131 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.255482 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-scripts\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.269022 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.304037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.335781 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-config-data\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.335976 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a992c5f-9e04-4776-8603-5c9b4def66c7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.340966 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csmjn\" (UniqueName: \"kubernetes.io/projected/3a992c5f-9e04-4776-8603-5c9b4def66c7-kube-api-access-csmjn\") pod \"cinder-api-0\" (UID: \"3a992c5f-9e04-4776-8603-5c9b4def66c7\") " pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.425512 
4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.425770 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5965d4d6c4-8lvw4" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api-log" containerID="cri-o://b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557" gracePeriod=30 Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.426237 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5965d4d6c4-8lvw4" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api" containerID="cri-o://019f0e226ccb91844092267cf655de8349e2d0522fffcc24fe214b808a219965" gracePeriod=30 Jan 26 15:56:58 crc kubenswrapper[4713]: E0126 15:56:58.522247 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d7d58d_d78f_4623_a9dc_5c4fd0077607.slice/crio-b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d7d58d_d78f_4623_a9dc_5c4fd0077607.slice/crio-conmon-b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.542126 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.863737 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.865293 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.876244 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.876733 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.876928 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cbp9d" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.884513 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.915649 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerStarted","Data":"45cf857e171f6445dd724e6b045541a45bfd6643449441d4952734261a764ce4"} Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.915760 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.939512 4713 generic.go:334] "Generic (PLEG): container finished" podID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerID="b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557" exitCode=143 Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.939805 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerDied","Data":"b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557"} Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.967639 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgvr\" (UniqueName: \"kubernetes.io/projected/5ee23a80-20ad-45b5-9670-c165085175ab-kube-api-access-4cgvr\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.967788 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.967961 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.968131 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:58 crc kubenswrapper[4713]: I0126 15:56:58.987804 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.637191988 
podStartE2EDuration="6.987782637s" podCreationTimestamp="2026-01-26 15:56:52 +0000 UTC" firstStartedPulling="2026-01-26 15:56:53.769893423 +0000 UTC m=+1388.906910658" lastFinishedPulling="2026-01-26 15:56:58.120484082 +0000 UTC m=+1393.257501307" observedRunningTime="2026-01-26 15:56:58.939292255 +0000 UTC m=+1394.076309500" watchObservedRunningTime="2026-01-26 15:56:58.987782637 +0000 UTC m=+1394.124799872" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.070118 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.070208 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.070347 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgvr\" (UniqueName: \"kubernetes.io/projected/5ee23a80-20ad-45b5-9670-c165085175ab-kube-api-access-4cgvr\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.070424 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.073310 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.089233 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.090920 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ee23a80-20ad-45b5-9670-c165085175ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.102987 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgvr\" (UniqueName: \"kubernetes.io/projected/5ee23a80-20ad-45b5-9670-c165085175ab-kube-api-access-4cgvr\") pod \"openstackclient\" (UID: \"5ee23a80-20ad-45b5-9670-c165085175ab\") " pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.207857 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.271128 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.785521 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.860881 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86023328-687b-4b40-ab36-68044003a0ac" path="/var/lib/kubelet/pods/86023328-687b-4b40-ab36-68044003a0ac/volumes" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.893952 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xfq6j" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.894675 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data\") pod \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.894747 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts\") pod \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.894833 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs\") pod \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.894901 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle\") pod \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.894944 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx79x\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x\") pod \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\" (UID: \"889cf7db-25b0-4afa-8daa-351dbd2dffe8\") " Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.904017 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts" (OuterVolumeSpecName: "scripts") pod "889cf7db-25b0-4afa-8daa-351dbd2dffe8" (UID: "889cf7db-25b0-4afa-8daa-351dbd2dffe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.909250 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x" (OuterVolumeSpecName: "kube-api-access-hx79x") pod "889cf7db-25b0-4afa-8daa-351dbd2dffe8" (UID: "889cf7db-25b0-4afa-8daa-351dbd2dffe8"). InnerVolumeSpecName "kube-api-access-hx79x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.932791 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs" (OuterVolumeSpecName: "certs") pod "889cf7db-25b0-4afa-8daa-351dbd2dffe8" (UID: "889cf7db-25b0-4afa-8daa-351dbd2dffe8"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.953264 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data" (OuterVolumeSpecName: "config-data") pod "889cf7db-25b0-4afa-8daa-351dbd2dffe8" (UID: "889cf7db-25b0-4afa-8daa-351dbd2dffe8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:56:59 crc kubenswrapper[4713]: I0126 15:56:59.994514 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a992c5f-9e04-4776-8603-5c9b4def66c7","Type":"ContainerStarted","Data":"66936f2b7cc55aac24cda27808f73ab387b7d37097734b517fec35dd0331e761"} Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.010909 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8626j\" (UniqueName: \"kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j\") pod \"67bee733-1013-44d9-ac74-5ce552dbb606\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.010945 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data\") pod \"67bee733-1013-44d9-ac74-5ce552dbb606\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011266 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data\") pod \"67bee733-1013-44d9-ac74-5ce552dbb606\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011305 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle\") pod \"67bee733-1013-44d9-ac74-5ce552dbb606\" (UID: \"67bee733-1013-44d9-ac74-5ce552dbb606\") " Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011826 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011845 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011855 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx79x\" (UniqueName: \"kubernetes.io/projected/889cf7db-25b0-4afa-8daa-351dbd2dffe8-kube-api-access-hx79x\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.011867 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.017044 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-lm4cs" event={"ID":"889cf7db-25b0-4afa-8daa-351dbd2dffe8","Type":"ContainerDied","Data":"4b73e3ba37a279b289c6f71a7f67f5c56dac7523b44c317af35fe615c8fc4038"} Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.017095 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b73e3ba37a279b289c6f71a7f67f5c56dac7523b44c317af35fe615c8fc4038" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.017224 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-lm4cs" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.019574 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "67bee733-1013-44d9-ac74-5ce552dbb606" (UID: "67bee733-1013-44d9-ac74-5ce552dbb606"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.031536 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j" (OuterVolumeSpecName: "kube-api-access-8626j") pod "67bee733-1013-44d9-ac74-5ce552dbb606" (UID: "67bee733-1013-44d9-ac74-5ce552dbb606"). InnerVolumeSpecName "kube-api-access-8626j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.033204 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xfq6j" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.033402 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xfq6j" event={"ID":"67bee733-1013-44d9-ac74-5ce552dbb606","Type":"ContainerDied","Data":"418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2"} Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.033428 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418d1f896e82904c9a93c5b12ed7ea24cee2766cd39d7dcf85b6132ebf4604f2" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.085989 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "889cf7db-25b0-4afa-8daa-351dbd2dffe8" (UID: "889cf7db-25b0-4afa-8daa-351dbd2dffe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.090490 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.113496 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889cf7db-25b0-4afa-8daa-351dbd2dffe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.113531 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8626j\" (UniqueName: \"kubernetes.io/projected/67bee733-1013-44d9-ac74-5ce552dbb606-kube-api-access-8626j\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.113545 4713 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.124663 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:00 crc kubenswrapper[4713]: E0126 15:57:00.125245 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889cf7db-25b0-4afa-8daa-351dbd2dffe8" containerName="cloudkitty-storageinit" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.125271 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="889cf7db-25b0-4afa-8daa-351dbd2dffe8" containerName="cloudkitty-storageinit" Jan 26 15:57:00 crc kubenswrapper[4713]: E0126 15:57:00.125284 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" containerName="glance-db-sync" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.125293 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" containerName="glance-db-sync" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.125640 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="889cf7db-25b0-4afa-8daa-351dbd2dffe8" containerName="cloudkitty-storageinit" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.125670 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" containerName="glance-db-sync" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.126550 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.129397 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.143339 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.148850 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67bee733-1013-44d9-ac74-5ce552dbb606" (UID: "67bee733-1013-44d9-ac74-5ce552dbb606"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.173160 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.185059 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214555 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214609 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214679 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214719 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6l9\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214753 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214872 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.214981 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.228776 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data" (OuterVolumeSpecName: "config-data") pod "67bee733-1013-44d9-ac74-5ce552dbb606" (UID: "67bee733-1013-44d9-ac74-5ce552dbb606"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.254106 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316563 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f6l9\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316620 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316706 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316807 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316863 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.316922 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bee733-1013-44d9-ac74-5ce552dbb606-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.328663 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.359656 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.360277 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.380824 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.383166 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.405680 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f6l9\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9\") pod \"cloudkitty-proc-0\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.477030 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.482052 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.512763 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.524491 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549440 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549502 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpv4z\" (UniqueName: \"kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549603 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549641 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: 
\"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549706 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.549821 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.569965 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.572619 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.591427 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.591831 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.651196 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662099 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662191 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662231 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662259 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpv4z\" (UniqueName: \"kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662317 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: 
\"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.662344 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.668692 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.671086 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.673457 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.674166 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.678667 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.744298 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpv4z\" (UniqueName: \"kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z\") pod \"dnsmasq-dns-86b78b4d8c-7v2hw\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") " pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764128 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4m5\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764216 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc 
kubenswrapper[4713]: I0126 15:57:00.764262 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764307 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764384 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764415 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.764431 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.765091 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867390 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4m5\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867457 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867497 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867570 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867640 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867671 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.867686 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.869233 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.892168 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.893111 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs\") pod \"cloudkitty-api-0\" (UID: 
\"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.893528 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.893669 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.896244 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:00 crc kubenswrapper[4713]: I0126 15:57:00.915557 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4m5\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5\") pod \"cloudkitty-api-0\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.074135 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a992c5f-9e04-4776-8603-5c9b4def66c7","Type":"ContainerStarted","Data":"276a3bff7c7500fa26fe059eb6f6837033857c376981924174c71d4cc6bd471f"} Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.104538 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ee23a80-20ad-45b5-9670-c165085175ab","Type":"ContainerStarted","Data":"d5c6aeca54ce9aafd5c53f254d08cd802e072cf02f7cf58c3def4e1ebe8d2b48"} Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.104910 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="dnsmasq-dns" containerID="cri-o://27b1e01150ce0ee638b504ea460a47f1526a7d749dcf2feab72ec813bb5d37f1" gracePeriod=10 Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.152057 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.235145 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.485193 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"] Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.517205 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.519969 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.539080 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.581458 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626441 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626496 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626558 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626650 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626765 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrr7\" (UniqueName: \"kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.626799 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.727617 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.728191 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " 
pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.728284 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.728449 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.728562 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrr7\" (UniqueName: \"kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.728690 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.729061 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.729338 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.729372 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.730027 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.730125 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 
15:57:01.784749 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrr7\" (UniqueName: \"kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7\") pod \"dnsmasq-dns-67bdc55879-2j86n\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.851787 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5965d4d6c4-8lvw4" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.173:9311/healthcheck\": read tcp 10.217.0.2:54076->10.217.0.173:9311: read: connection reset by peer" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.851864 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5965d4d6c4-8lvw4" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.173:9311/healthcheck\": read tcp 10.217.0.2:54086->10.217.0.173:9311: read: connection reset by peer" Jan 26 15:57:01 crc kubenswrapper[4713]: I0126 15:57:01.936892 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.040281 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.191641 4713 generic.go:334] "Generic (PLEG): container finished" podID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerID="019f0e226ccb91844092267cf655de8349e2d0522fffcc24fe214b808a219965" exitCode=0 Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.191738 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerDied","Data":"019f0e226ccb91844092267cf655de8349e2d0522fffcc24fe214b808a219965"} Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.204731 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59866b8478-b6cbm" podUID="a611ae0d-da10-46d8-8520-0a3dd75e1d1c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.174:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.237733 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.238288 4713 generic.go:334] "Generic (PLEG): container finished" podID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerID="27b1e01150ce0ee638b504ea460a47f1526a7d749dcf2feab72ec813bb5d37f1" exitCode=0 Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.238402 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerDied","Data":"27b1e01150ce0ee638b504ea460a47f1526a7d749dcf2feab72ec813bb5d37f1"} Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.238447 4713 scope.go:117] "RemoveContainer" containerID="27b1e01150ce0ee638b504ea460a47f1526a7d749dcf2feab72ec813bb5d37f1" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.249303 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" event={"ID":"168827e6-0812-4edd-8632-01b2c82937b7","Type":"ContainerStarted","Data":"e911049cac9a492a827409ab4691426768d74b5b4817272749d27b9bd73d8565"} Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.252561 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="cinder-scheduler" containerID="cri-o://1c5186870d30250c65e100b679e0f494333c92e71175c3f256ed15fc0495874a" gracePeriod=30 Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.252863 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"ac7af3ca-8197-48cd-8480-a0c5292c9fa6","Type":"ContainerStarted","Data":"c6f20cfc0d9a296e819bceb5bad1d4cd0003e4ada4504aebe843b244d85d2bfc"} Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.252932 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="probe" containerID="cri-o://3f081b8a503dd3110100dec2e71367de73f05fe4b7b6884bc22205f384c66496" gracePeriod=30 Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267437 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267492 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267601 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267640 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 
15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267676 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.267763 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.300533 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs" (OuterVolumeSpecName: "kube-api-access-45jgs") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "kube-api-access-45jgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.354936 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.370040 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-kube-api-access-45jgs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.370086 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.479762 4713 scope.go:117] "RemoveContainer" containerID="f288a5673d4a4a3c2445160ec073ffc7736d3ba62afce0e4116ce9f17c7bd66b" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.553252 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:02 crc kubenswrapper[4713]: E0126 15:57:02.554537 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="init" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.554555 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="init" Jan 26 15:57:02 crc kubenswrapper[4713]: E0126 15:57:02.554585 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="dnsmasq-dns" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.554591 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="dnsmasq-dns" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.555008 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" containerName="dnsmasq-dns" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.558438 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.565872 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.566277 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7s45m" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.569642 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.575677 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.576753 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.587995 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.600134 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.600445 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") pod \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\" (UID: \"fe76f66e-59c6-426c-8fa0-b3003bf0b6da\") " Jan 26 15:57:02 crc kubenswrapper[4713]: W0126 15:57:02.600579 4713 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fe76f66e-59c6-426c-8fa0-b3003bf0b6da/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.600616 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.600456 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.644558 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.644608 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.644618 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.755073 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788073 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788150 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788234 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788259 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmx97\" (UniqueName: \"kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788281 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.788403 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.821021 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.842531 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.850824 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.852265 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.891138 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmx97\" (UniqueName: \"kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.891536 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.891680 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.892064 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.892705 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.893101 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.893282 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.894446 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.898206 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config" (OuterVolumeSpecName: "config") pod "fe76f66e-59c6-426c-8fa0-b3003bf0b6da" (UID: "fe76f66e-59c6-426c-8fa0-b3003bf0b6da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.905832 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.905881 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/00dc91cd9df64dbe706f69ffd599c2ae7292b0cc0cf466faa03c0fe7216c3630/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.913792 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.940088 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.944555 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.953903 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.959893 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5965d4d6c4-8lvw4" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.960087 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmx97\" (UniqueName: \"kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.960543 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996024 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996095 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996121 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996273 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996411 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996472 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pxs\" (UniqueName: \"kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996618 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:02 crc kubenswrapper[4713]: I0126 15:57:02.996796 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe76f66e-59c6-426c-8fa0-b3003bf0b6da-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.097740 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs\") pod \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.097913 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data\") pod \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.097951 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle\") pod \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.097981 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx6s2\" (UniqueName: \"kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2\") pod \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098021 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom\") pod \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\" (UID: \"c8d7d58d-d78f-4623-a9dc-5c4fd0077607\") " Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098217 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs" (OuterVolumeSpecName: "logs") pod "c8d7d58d-d78f-4623-a9dc-5c4fd0077607" (UID: "c8d7d58d-d78f-4623-a9dc-5c4fd0077607"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098256 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098290 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pxs\" (UniqueName: \"kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098387 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098474 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098504 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098524 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098555 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.098623 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-logs\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.100169 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.100417 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.108085 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.108136 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4dc69d14e8e0f9ca8a772d269bb39f8a91314d344a69118d1458dfeb18a9550/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.115320 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.115966 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.137735 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8d7d58d-d78f-4623-a9dc-5c4fd0077607" (UID: "c8d7d58d-d78f-4623-a9dc-5c4fd0077607"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.138444 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.142559 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pxs\" (UniqueName: \"kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.151097 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2" (OuterVolumeSpecName: "kube-api-access-cx6s2") pod "c8d7d58d-d78f-4623-a9dc-5c4fd0077607" (UID: "c8d7d58d-d78f-4623-a9dc-5c4fd0077607"). InnerVolumeSpecName "kube-api-access-cx6s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.202947 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx6s2\" (UniqueName: \"kubernetes.io/projected/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-kube-api-access-cx6s2\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.202974 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.265683 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" event={"ID":"b1d0ef70-9f37-4d0c-b317-7100a193699e","Type":"ContainerStarted","Data":"d065e628c087e60c199e96844ee4627d8eb2d7fa10161b3b57c0fdea71f3156b"}
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.267871 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerStarted","Data":"cf230e77093a361ad02790fac2bf4da58219f2886d48c4655491bce233d4386e"}
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.278423 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5965d4d6c4-8lvw4" event={"ID":"c8d7d58d-d78f-4623-a9dc-5c4fd0077607","Type":"ContainerDied","Data":"bb7749ba0966c25105b86b8ad95955d55a4ba789db67193d8cba406d44c1a626"}
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.278494 4713 scope.go:117] "RemoveContainer" containerID="019f0e226ccb91844092267cf655de8349e2d0522fffcc24fe214b808a219965"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.278649 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5965d4d6c4-8lvw4"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.293488 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.294475 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-j9whv" event={"ID":"fe76f66e-59c6-426c-8fa0-b3003bf0b6da","Type":"ContainerDied","Data":"57393f8b3da039d0b4525d23bb3f0146357114e5ce1561a130f0e0be1b5f1162"}
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.301071 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.301605 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.307713 4713 generic.go:334] "Generic (PLEG): container finished" podID="168827e6-0812-4edd-8632-01b2c82937b7" containerID="7aae906c2e87d8d6bb7d2b0f8fb1caa17e277d32efe8e13dc87e401cb9706f86" exitCode=0
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.307821 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" event={"ID":"168827e6-0812-4edd-8632-01b2c82937b7","Type":"ContainerDied","Data":"7aae906c2e87d8d6bb7d2b0f8fb1caa17e277d32efe8e13dc87e401cb9706f86"}
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.353258 4713 scope.go:117] "RemoveContainer" containerID="b5a54cc25f0cc72f8941b209af70a1a8b1cc73d063c296b799c7ed5b3c9ce557"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.511576 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8d7d58d-d78f-4623-a9dc-5c4fd0077607" (UID: "c8d7d58d-d78f-4623-a9dc-5c4fd0077607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.516270 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.676351 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.891501 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 15:57:03 crc kubenswrapper[4713]: I0126 15:57:03.926700 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.009075 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data" (OuterVolumeSpecName: "config-data") pod "c8d7d58d-d78f-4623-a9dc-5c4fd0077607" (UID: "c8d7d58d-d78f-4623-a9dc-5c4fd0077607"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.042759 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d7d58d-d78f-4623-a9dc-5c4fd0077607-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.104225 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.207610 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59866b8478-b6cbm" podUID="a611ae0d-da10-46d8-8520-0a3dd75e1d1c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.174:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.306738 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.340959 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.359395 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.375732 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw" event={"ID":"168827e6-0812-4edd-8632-01b2c82937b7","Type":"ContainerDied","Data":"e911049cac9a492a827409ab4691426768d74b5b4817272749d27b9bd73d8565"}
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.375785 4713 scope.go:117] "RemoveContainer" containerID="7aae906c2e87d8d6bb7d2b0f8fb1caa17e277d32efe8e13dc87e401cb9706f86"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.375884 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b78b4d8c-7v2hw"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.389794 4713 generic.go:334] "Generic (PLEG): container finished" podID="18790311-f08f-4785-b9df-ba3baf764010" containerID="3f081b8a503dd3110100dec2e71367de73f05fe4b7b6884bc22205f384c66496" exitCode=0
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.389857 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerDied","Data":"3f081b8a503dd3110100dec2e71367de73f05fe4b7b6884bc22205f384c66496"}
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.397667 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-j9whv"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.425069 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a992c5f-9e04-4776-8603-5c9b4def66c7","Type":"ContainerStarted","Data":"edc13641926947ed38c4d11b6a963611ed3ca25e3a141d5d33881f6244dadb17"}
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.426047 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.486531 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.491578 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpv4z\" (UniqueName: \"kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.491967 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.492008 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.492625 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.492895 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.493322 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb\") pod \"168827e6-0812-4edd-8632-01b2c82937b7\" (UID: \"168827e6-0812-4edd-8632-01b2c82937b7\") "
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.529508 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z" (OuterVolumeSpecName: "kube-api-access-vpv4z") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "kube-api-access-vpv4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.608915 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.608893549 podStartE2EDuration="7.608893549s" podCreationTimestamp="2026-01-26 15:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:04.498393825 +0000 UTC m=+1399.635411060" watchObservedRunningTime="2026-01-26 15:57:04.608893549 +0000 UTC m=+1399.745910784"
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.610478 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5965d4d6c4-8lvw4"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.615130 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.617519 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.619614 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.635410 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.635452 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpv4z\" (UniqueName: \"kubernetes.io/projected/168827e6-0812-4edd-8632-01b2c82937b7-kube-api-access-vpv4z\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.659406 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.661216 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config" (OuterVolumeSpecName: "config") pod "168827e6-0812-4edd-8632-01b2c82937b7" (UID: "168827e6-0812-4edd-8632-01b2c82937b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.736892 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.736918 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.736928 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.736939 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168827e6-0812-4edd-8632-01b2c82937b7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.796190 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.817985 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86b78b4d8c-7v2hw"]
Jan 26 15:57:04 crc kubenswrapper[4713]: I0126 15:57:04.991840 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 15:57:05 crc kubenswrapper[4713]: W0126 15:57:05.012792 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ce117c_b58d_4e84_b8f9_9acfb6bfcdd5.slice/crio-183b31c3c77b3df4107e317c885ec51d5874ffd07a605b7df89fdcca1a4dca1f WatchSource:0}: Error finding container 183b31c3c77b3df4107e317c885ec51d5874ffd07a605b7df89fdcca1a4dca1f: Status 404 returned error can't find the container with id 183b31c3c77b3df4107e317c885ec51d5874ffd07a605b7df89fdcca1a4dca1f
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.155771 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.436416 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerStarted","Data":"183b31c3c77b3df4107e317c885ec51d5874ffd07a605b7df89fdcca1a4dca1f"}
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.438933 4713 generic.go:334] "Generic (PLEG): container finished" podID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerID="5b81ac5be36301edc769d18e08a480401fa7f280e2324ee24d364602dd3e9088" exitCode=0
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.439010 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" event={"ID":"b1d0ef70-9f37-4d0c-b317-7100a193699e","Type":"ContainerDied","Data":"5b81ac5be36301edc769d18e08a480401fa7f280e2324ee24d364602dd3e9088"}
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.443163 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerStarted","Data":"7c930878284e055cb18e895a81de72fa3a3e28db807f621e9841129b8b204561"}
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.443204 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerStarted","Data":"17c989c4a47c7806a9ea9dba3c0d2bf1c32390f084f841f3d50c0129b0207d35"}
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.443313 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api-log" containerID="cri-o://17c989c4a47c7806a9ea9dba3c0d2bf1c32390f084f841f3d50c0129b0207d35" gracePeriod=30
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.443570 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.443608 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api" containerID="cri-o://7c930878284e055cb18e895a81de72fa3a3e28db807f621e9841129b8b204561" gracePeriod=30
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.461463 4713 generic.go:334] "Generic (PLEG): container finished" podID="18790311-f08f-4785-b9df-ba3baf764010" containerID="1c5186870d30250c65e100b679e0f494333c92e71175c3f256ed15fc0495874a" exitCode=0
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.462027 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerDied","Data":"1c5186870d30250c65e100b679e0f494333c92e71175c3f256ed15fc0495874a"}
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.501867 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=5.501840244 podStartE2EDuration="5.501840244s" podCreationTimestamp="2026-01-26 15:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:05.479921618 +0000 UTC m=+1400.616938853" watchObservedRunningTime="2026-01-26 15:57:05.501840244 +0000 UTC m=+1400.638857479"
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.822204 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168827e6-0812-4edd-8632-01b2c82937b7" path="/var/lib/kubelet/pods/168827e6-0812-4edd-8632-01b2c82937b7/volumes"
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.823297 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" path="/var/lib/kubelet/pods/c8d7d58d-d78f-4623-a9dc-5c4fd0077607/volumes"
Jan 26 15:57:05 crc kubenswrapper[4713]: I0126 15:57:05.824002 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe76f66e-59c6-426c-8fa0-b3003bf0b6da" path="/var/lib/kubelet/pods/fe76f66e-59c6-426c-8fa0-b3003bf0b6da/volumes"
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.368280 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.493985 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5f8\" (UniqueName: \"kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494051 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494094 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494155 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494261 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494325 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id\") pod \"18790311-f08f-4785-b9df-ba3baf764010\" (UID: \"18790311-f08f-4785-b9df-ba3baf764010\") "
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.494897 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.539629 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8" (OuterVolumeSpecName: "kube-api-access-4l5f8") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "kube-api-access-4l5f8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.539744 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.541136 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts" (OuterVolumeSpecName: "scripts") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.575244 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerID="17c989c4a47c7806a9ea9dba3c0d2bf1c32390f084f841f3d50c0129b0207d35" exitCode=143
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.575326 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerDied","Data":"17c989c4a47c7806a9ea9dba3c0d2bf1c32390f084f841f3d50c0129b0207d35"}
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.577780 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerStarted","Data":"8531b2cac14826cde91e43f69255e6859930fac78d1e2c235859fd603fcb8569"}
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.605597 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"18790311-f08f-4785-b9df-ba3baf764010","Type":"ContainerDied","Data":"6862901dc413c6499f1f8abad46596cbc005df44df7f790bc9fe572e0b4dad5b"}
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.605911 4713 scope.go:117] "RemoveContainer" containerID="3f081b8a503dd3110100dec2e71367de73f05fe4b7b6884bc22205f384c66496"
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.606139 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.614381 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l5f8\" (UniqueName: \"kubernetes.io/projected/18790311-f08f-4785-b9df-ba3baf764010-kube-api-access-4l5f8\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.614421 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.614432 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.614442 4713 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18790311-f08f-4785-b9df-ba3baf764010-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.730191 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerStarted","Data":"43ce0a938be8d119da7483c7de234dd4c518b144f0302ddf7cb0b7366becb771"}
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.737604 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.825211 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.852883 4713 scope.go:117] "RemoveContainer" containerID="1c5186870d30250c65e100b679e0f494333c92e71175c3f256ed15fc0495874a"
Jan 26 15:57:06 crc kubenswrapper[4713]: I0126 15:57:06.931352 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data" (OuterVolumeSpecName: "config-data") pod "18790311-f08f-4785-b9df-ba3baf764010" (UID: "18790311-f08f-4785-b9df-ba3baf764010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.029600 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18790311-f08f-4785-b9df-ba3baf764010-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.224393 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.260875 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.284212 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.328456 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: E0126 15:57:07.328978 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api-log"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.328998 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api-log"
Jan 26 15:57:07 crc kubenswrapper[4713]: E0126 15:57:07.329022 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="cinder-scheduler"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329030 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="cinder-scheduler"
Jan 26 15:57:07 crc kubenswrapper[4713]: E0126 15:57:07.329047 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="probe"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329055 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="probe"
Jan 26 15:57:07 crc kubenswrapper[4713]: E0126 15:57:07.329080 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168827e6-0812-4edd-8632-01b2c82937b7" containerName="init"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329086 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="168827e6-0812-4edd-8632-01b2c82937b7" containerName="init"
Jan 26 15:57:07 crc kubenswrapper[4713]: E0126 15:57:07.329111 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329118 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329387 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="168827e6-0812-4edd-8632-01b2c82937b7" containerName="init"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329413 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api-log"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329426 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="cinder-scheduler"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329442 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="18790311-f08f-4785-b9df-ba3baf764010" containerName="probe"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.329454 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d7d58d-d78f-4623-a9dc-5c4fd0077607" containerName="barbican-api"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.330897 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.337880 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.345954 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.363567 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.442813 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.443125 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.443144 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.443183 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.443213 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkctf\" (UniqueName: \"kubernetes.io/projected/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-kube-api-access-vkctf\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.443250 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545013 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545058 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545096 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545127 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkctf\" (UniqueName: \"kubernetes.io/projected/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-kube-api-access-vkctf\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545160 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545246 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.545766 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.552173 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.554478 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.556104 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.556172 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.561577 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkctf\" (UniqueName: \"kubernetes.io/projected/59c0b6f8-caab-480e-8fd6-7e7e896efaaa-kube-api-access-vkctf\") pod \"cinder-scheduler-0\" (UID: \"59c0b6f8-caab-480e-8fd6-7e7e896efaaa\") " pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.672768 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.779103 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" event={"ID":"b1d0ef70-9f37-4d0c-b317-7100a193699e","Type":"ContainerStarted","Data":"0b281f1f1cb6be1832f2d972cb83d46d8219295aea07a7ec1a550c00009f5b17"}
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.780445 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-2j86n"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.820585 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" podStartSLOduration=6.820559613 podStartE2EDuration="6.820559613s" podCreationTimestamp="2026-01-26 15:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:07.802941429 +0000 UTC m=+1402.939958664" watchObservedRunningTime="2026-01-26 15:57:07.820559613 +0000 UTC m=+1402.957576848"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.842804 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18790311-f08f-4785-b9df-ba3baf764010" path="/var/lib/kubelet/pods/18790311-f08f-4785-b9df-ba3baf764010/volumes"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.843921 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerStarted","Data":"fbaf2ef79cbe8540a1d81a3b52dc1680c0d605c61f74fa1f93b1c10d00682807"}
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.843960 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"ac7af3ca-8197-48cd-8480-a0c5292c9fa6","Type":"ContainerStarted","Data":"c0c52f7042da6f8e751abae1da29cad7eaa249c53338867e8b994a88edfdf4ff"}
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.862482 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerStarted","Data":"a17d8e9b4cafb92842eae1143019bb42d5e05db698eb9f3b39a2cdfa67aeb9e5"}
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.862672 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-log" containerID="cri-o://43ce0a938be8d119da7483c7de234dd4c518b144f0302ddf7cb0b7366becb771" gracePeriod=30
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.862791 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-httpd" containerID="cri-o://a17d8e9b4cafb92842eae1143019bb42d5e05db698eb9f3b39a2cdfa67aeb9e5" gracePeriod=30
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.878874 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.258921425 podStartE2EDuration="7.878846501s" podCreationTimestamp="2026-01-26 15:57:00 +0000 UTC" firstStartedPulling="2026-01-26 15:57:01.624110668 +0000 UTC m=+1396.761127903" lastFinishedPulling="2026-01-26 15:57:06.244035744 +0000 UTC m=+1401.381052979" observedRunningTime="2026-01-26 15:57:07.852527661 +0000 UTC m=+1402.989544926" watchObservedRunningTime="2026-01-26 15:57:07.878846501 +0000 UTC m=+1403.015863746"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.899916 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.909554 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.909526563 podStartE2EDuration="6.909526563s" podCreationTimestamp="2026-01-26 15:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:07.886894577 +0000 UTC m=+1403.023911822" watchObservedRunningTime="2026-01-26 15:57:07.909526563 +0000 UTC m=+1403.046543798"
Jan 26 15:57:07 crc kubenswrapper[4713]: I0126 15:57:07.967535 4713 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod23a6554a-422b-4fb1-a6c6-e99368e2b129"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod23a6554a-422b-4fb1-a6c6-e99368e2b129] : Timed out while waiting for systemd to remove kubepods-besteffort-pod23a6554a_422b_4fb1_a6c6_e99368e2b129.slice"
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.403264 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 15:57:08 crc kubenswrapper[4713]: W0126 15:57:08.423498 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59c0b6f8_caab_480e_8fd6_7e7e896efaaa.slice/crio-1917505e6401453c2eb6d35653f36dc1027a3f7375dfd9ecdbd72520fa63120c WatchSource:0}: Error finding container 1917505e6401453c2eb6d35653f36dc1027a3f7375dfd9ecdbd72520fa63120c: Status 404 returned error can't find the container with id 1917505e6401453c2eb6d35653f36dc1027a3f7375dfd9ecdbd72520fa63120c
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.880982 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerStarted","Data":"02ad9e35c726e401b617fe5377ee03a6b99446d11d4ba1154a797c286c8cc90a"}
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.881484 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-log" containerID="cri-o://fbaf2ef79cbe8540a1d81a3b52dc1680c0d605c61f74fa1f93b1c10d00682807" gracePeriod=30
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.883623 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-httpd" containerID="cri-o://02ad9e35c726e401b617fe5377ee03a6b99446d11d4ba1154a797c286c8cc90a" gracePeriod=30
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.924728 4713 generic.go:334] "Generic (PLEG): container finished" podID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerID="a17d8e9b4cafb92842eae1143019bb42d5e05db698eb9f3b39a2cdfa67aeb9e5" exitCode=0
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.925077 4713 generic.go:334] "Generic (PLEG): container finished" podID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerID="43ce0a938be8d119da7483c7de234dd4c518b144f0302ddf7cb0b7366becb771" exitCode=143
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.925162 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerDied","Data":"a17d8e9b4cafb92842eae1143019bb42d5e05db698eb9f3b39a2cdfa67aeb9e5"}
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.925193 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerDied","Data":"43ce0a938be8d119da7483c7de234dd4c518b144f0302ddf7cb0b7366becb771"}
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.931394 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.931370428 podStartE2EDuration="7.931370428s" podCreationTimestamp="2026-01-26 15:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:08.902971611 +0000 UTC m=+1404.039988846" watchObservedRunningTime="2026-01-26 15:57:08.931370428 +0000 UTC m=+1404.068387663"
Jan 26 15:57:08 crc kubenswrapper[4713]: I0126 15:57:08.944290 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59c0b6f8-caab-480e-8fd6-7e7e896efaaa","Type":"ContainerStarted","Data":"1917505e6401453c2eb6d35653f36dc1027a3f7375dfd9ecdbd72520fa63120c"}
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.076963 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229219 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229468 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229532 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmx97\" (UniqueName: \"kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229766 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229815 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229841 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.229944 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data\") pod \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\" (UID: \"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5\") "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.230247 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.230512 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.232565 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs" (OuterVolumeSpecName: "logs") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.244574 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts" (OuterVolumeSpecName: "scripts") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.246229 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97" (OuterVolumeSpecName: "kube-api-access-cmx97") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "kube-api-access-cmx97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.286765 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913" (OuterVolumeSpecName: "glance") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "pvc-ee75bc78-62c3-4a56-b6d6-deef53255913". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.312586 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.335899 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.335926 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmx97\" (UniqueName: \"kubernetes.io/projected/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-kube-api-access-cmx97\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.335952 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") on node \"crc\" "
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.335963 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.335973 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-logs\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.421787 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.422140 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ee75bc78-62c3-4a56-b6d6-deef53255913" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913") on node "crc"
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.439868 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.461954 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data" (OuterVolumeSpecName: "config-data") pod "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" (UID: "94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:57:09 crc kubenswrapper[4713]: I0126 15:57:09.541853 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.013755 4713 generic.go:334] "Generic (PLEG): container finished" podID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerID="02ad9e35c726e401b617fe5377ee03a6b99446d11d4ba1154a797c286c8cc90a" exitCode=0
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.013929 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerDied","Data":"02ad9e35c726e401b617fe5377ee03a6b99446d11d4ba1154a797c286c8cc90a"}
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.014016 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerDied","Data":"fbaf2ef79cbe8540a1d81a3b52dc1680c0d605c61f74fa1f93b1c10d00682807"}
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.013976 4713 generic.go:334] "Generic (PLEG): container finished" podID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerID="fbaf2ef79cbe8540a1d81a3b52dc1680c0d605c61f74fa1f93b1c10d00682807" exitCode=143
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.028905 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" containerName="cloudkitty-proc" containerID="cri-o://c0c52f7042da6f8e751abae1da29cad7eaa249c53338867e8b994a88edfdf4ff" gracePeriod=30
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.029011 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.029783 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5","Type":"ContainerDied","Data":"183b31c3c77b3df4107e317c885ec51d5874ffd07a605b7df89fdcca1a4dca1f"}
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.029815 4713 scope.go:117] "RemoveContainer" containerID="a17d8e9b4cafb92842eae1143019bb42d5e05db698eb9f3b39a2cdfa67aeb9e5"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.165644 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.182741 4713 scope.go:117] "RemoveContainer" containerID="43ce0a938be8d119da7483c7de234dd4c518b144f0302ddf7cb0b7366becb771"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.219205 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.233588 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250101 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 15:57:10 crc kubenswrapper[4713]: E0126 15:57:10.250581 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-httpd"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250596 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-httpd"
Jan 26 15:57:10 crc kubenswrapper[4713]: E0126 15:57:10.250605 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-httpd"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250611 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-httpd"
Jan 26 15:57:10 crc kubenswrapper[4713]: E0126 15:57:10.250649 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-log"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250658 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-log"
Jan 26 15:57:10 crc kubenswrapper[4713]: E0126 15:57:10.250669 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-log"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250676 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-log"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250912 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-httpd"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250925 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" containerName="glance-log"
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250932 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969"
containerName="glance-httpd" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.250948 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" containerName="glance-log" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.252156 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.257721 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.257912 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259700 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259738 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259854 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259876 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259914 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.259973 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2pxs\" (UniqueName: \"kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.260008 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts\") pod \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\" (UID: \"cd61364c-3f9e-49b2-8ffb-d2315e83f969\") " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.271007 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.274330 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs" (OuterVolumeSpecName: "logs") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.274424 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts" (OuterVolumeSpecName: "scripts") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.296021 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs" (OuterVolumeSpecName: "kube-api-access-n2pxs") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "kube-api-access-n2pxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.321432 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.338282 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc" (OuterVolumeSpecName: "glance") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.364088 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.364567 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.364793 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.365068 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.365201 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.365521 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lsj\" (UniqueName: \"kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.365651 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.379711 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.380006 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-logs\") on 
node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.380084 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") on node \"crc\" " Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.380142 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd61364c-3f9e-49b2-8ffb-d2315e83f969-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.380205 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2pxs\" (UniqueName: \"kubernetes.io/projected/cd61364c-3f9e-49b2-8ffb-d2315e83f969-kube-api-access-n2pxs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.380260 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.391523 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.463989 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data" (OuterVolumeSpecName: "config-data") pod "cd61364c-3f9e-49b2-8ffb-d2315e83f969" (UID: "cd61364c-3f9e-49b2-8ffb-d2315e83f969"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.487165 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.487235 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.487263 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.487355 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lsj\" (UniqueName: \"kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.498783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.498872 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.498951 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.499021 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.499187 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.499206 4713 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61364c-3f9e-49b2-8ffb-d2315e83f969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.501933 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.502800 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.503132 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.506031 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.506488 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.510574 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.510620 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/00dc91cd9df64dbe706f69ffd599c2ae7292b0cc0cf466faa03c0fe7216c3630/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.511218 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
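[Editor's note] The reconciler_common.go entries around this point show kubelet's volume manager diffing desired against actual state: volumes still mounted for the deleted pod UID (94ce117c-...) get UnmountVolume operations, while volumes required by the replacement pod UID (ccb1131b-...) get MountVolume operations. A simplified sketch of that diff loop, assuming illustrative names only:

package main

import "fmt"

type volumeKey struct{ podUID, volume string }

// reconcile compares the actual world (what is mounted) against the desired
// world (what running pods need) and starts the corresponding operations.
func reconcile(actual, desired map[volumeKey]bool) {
	// Unmount anything mounted that is no longer desired
	// (e.g. the old glance pod after its SyncLoop DELETE).
	for k := range actual {
		if !desired[k] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", k.volume, k.podUID)
		}
	}
	// Mount anything desired that is not yet mounted
	// (e.g. the replacement glance pod's volumes).
	for k := range desired {
		if !actual[k] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", k.volume, k.podUID)
		}
	}
}

func main() {
	actual := map[volumeKey]bool{{"94ce117c", "config-data"}: true}
	desired := map[volumeKey]bool{{"ccb1131b", "config-data"}: true}
	reconcile(actual, desired)
}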
Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.511386 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc") on node "crc" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.523107 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.543572 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lsj\" (UniqueName: \"kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.607225 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.671986 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:10 crc kubenswrapper[4713]: I0126 15:57:10.924872 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.071316 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59c0b6f8-caab-480e-8fd6-7e7e896efaaa","Type":"ContainerStarted","Data":"ddf67e39eef59c7d7d199ff651daebdde701849b99c80dc62ebc5d5eb164063e"} Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.087903 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cd61364c-3f9e-49b2-8ffb-d2315e83f969","Type":"ContainerDied","Data":"8531b2cac14826cde91e43f69255e6859930fac78d1e2c235859fd603fcb8569"} Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.087955 4713 scope.go:117] "RemoveContainer" containerID="02ad9e35c726e401b617fe5377ee03a6b99446d11d4ba1154a797c286c8cc90a" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.087993 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.136943 4713 scope.go:117] "RemoveContainer" containerID="fbaf2ef79cbe8540a1d81a3b52dc1680c0d605c61f74fa1f93b1c10d00682807" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.159953 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.218636 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.264443 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.266715 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.284241 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.287759 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329631 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329677 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329712 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329741 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329801 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2rf\" (UniqueName: \"kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329881 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329905 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.329930 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.343492 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440656 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440711 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440743 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440804 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440834 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440860 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.440912 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr2rf\" (UniqueName: \"kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.445860 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.446137 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.464454 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.464486 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.467687 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.468122 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.468191 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4dc69d14e8e0f9ca8a772d269bb39f8a91314d344a69118d1458dfeb18a9550/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.475065 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.478726 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr2rf\" (UniqueName: \"kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.607796 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.623546 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.786239 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.827621 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5" path="/var/lib/kubelet/pods/94ce117c-b58d-4e84-b8f9-9acfb6bfcdd5/volumes" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.829210 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd61364c-3f9e-49b2-8ffb-d2315e83f969" path="/var/lib/kubelet/pods/cd61364c-3f9e-49b2-8ffb-d2315e83f969/volumes" Jan 26 15:57:11 crc kubenswrapper[4713]: I0126 15:57:11.943502 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.073274 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.073537 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="dnsmasq-dns" containerID="cri-o://8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401" gracePeriod=10 Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.130298 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59c0b6f8-caab-480e-8fd6-7e7e896efaaa","Type":"ContainerStarted","Data":"cfcdb23c4475d733e189beecefdb7c8afa3173fe18cbb6f592d5790dcae79da9"} Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.154139 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerStarted","Data":"6919bf2565ee558dadd5415f859ba912efdba54d56b7486fb7d07e0f86a7cdcc"} Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.183426 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.183374936 podStartE2EDuration="5.183374936s" podCreationTimestamp="2026-01-26 15:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:12.169492656 +0000 UTC m=+1407.306509891" watchObservedRunningTime="2026-01-26 15:57:12.183374936 +0000 UTC m=+1407.320392171" Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.675990 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 15:57:12 crc kubenswrapper[4713]: I0126 15:57:12.718762 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.042684 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.139601 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.139774 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.139896 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqb8j\" (UniqueName: \"kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.139951 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.139970 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.140001 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.179595 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j" (OuterVolumeSpecName: "kube-api-access-xqb8j") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "kube-api-access-xqb8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.234764 4713 generic.go:334] "Generic (PLEG): container finished" podID="8dd08527-0793-4933-bcc1-780d121ece65" containerID="8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401" exitCode=0 Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.234877 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" event={"ID":"8dd08527-0793-4933-bcc1-780d121ece65","Type":"ContainerDied","Data":"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401"} Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.234914 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" event={"ID":"8dd08527-0793-4933-bcc1-780d121ece65","Type":"ContainerDied","Data":"fc26845ecf377d3f51607eb82b00cfe3dd636b5db478f17513de6d75e80f0016"} Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.234937 4713 scope.go:117] "RemoveContainer" containerID="8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.235138 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-tjj5v" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.247519 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqb8j\" (UniqueName: \"kubernetes.io/projected/8dd08527-0793-4933-bcc1-780d121ece65-kube-api-access-xqb8j\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.263490 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerStarted","Data":"3783ef922fcba86d151b4c56fc4c3a470dadaae249a12d2c031f942c5b426401"} Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.306586 4713 scope.go:117] "RemoveContainer" containerID="98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.406457 4713 scope.go:117] "RemoveContainer" containerID="8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401" Jan 26 15:57:13 crc kubenswrapper[4713]: E0126 15:57:13.409831 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401\": container with ID starting with 8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401 not found: ID does not exist" containerID="8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.409878 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401"} err="failed to get container status \"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401\": rpc error: code = NotFound desc = could not find container \"8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401\": container with ID starting with 8f4a846453e701c54c627d5f23226526867a6be178f1046e312fd831a9dbe401 not found: ID does not exist" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.409903 4713 scope.go:117] "RemoveContainer" containerID="98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341" Jan 26 15:57:13 crc kubenswrapper[4713]: E0126 
15:57:13.420543 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341\": container with ID starting with 98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341 not found: ID does not exist" containerID="98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.420598 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341"} err="failed to get container status \"98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341\": rpc error: code = NotFound desc = could not find container \"98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341\": container with ID starting with 98d188b72bec6a629944f665d87ccb692578ed8f10bf97ca84e57af542468341 not found: ID does not exist" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.435170 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config" (OuterVolumeSpecName: "config") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.461529 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.461761 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") pod \"8dd08527-0793-4933-bcc1-780d121ece65\" (UID: \"8dd08527-0793-4933-bcc1-780d121ece65\") " Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.462287 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: W0126 15:57:13.463084 4713 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8dd08527-0793-4933-bcc1-780d121ece65/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.463102 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.486083 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.486104 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.487599 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8dd08527-0793-4933-bcc1-780d121ece65" (UID: "8dd08527-0793-4933-bcc1-780d121ece65"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.554609 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3a992c5f-9e04-4776-8603-5c9b4def66c7" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.181:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.555124 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-587f599955-5k56n" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.563940 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.563977 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.564098 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.564112 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dd08527-0793-4933-bcc1-780d121ece65-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.703537 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.703954 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.704160 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cbb889bd-76zsk" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-api" containerID="cri-o://b28def777f3a3cfb8248eb3963b9717c379abbc9050e3ae059af3b0f99f1c763" gracePeriod=30 Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.704673 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cbb889bd-76zsk" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-httpd" 
containerID="cri-o://59698a5520c4575e55acb5ccb5abe8d4aaec4d15a9112979c654bda564134150" gracePeriod=30 Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.735295 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-tjj5v"] Jan 26 15:57:13 crc kubenswrapper[4713]: I0126 15:57:13.824983 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dd08527-0793-4933-bcc1-780d121ece65" path="/var/lib/kubelet/pods/8dd08527-0793-4933-bcc1-780d121ece65/volumes" Jan 26 15:57:14 crc kubenswrapper[4713]: I0126 15:57:14.281406 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerStarted","Data":"9d32d94847475c92e6bdec901361f495d5bad467dc21c729a70bdbfa23738e24"} Jan 26 15:57:14 crc kubenswrapper[4713]: I0126 15:57:14.287622 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerStarted","Data":"698ab889796c9f3d3d5bf409eac17338db2d15e7b38293b2132ae073586dfe09"} Jan 26 15:57:14 crc kubenswrapper[4713]: I0126 15:57:14.307980 4713 generic.go:334] "Generic (PLEG): container finished" podID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerID="59698a5520c4575e55acb5ccb5abe8d4aaec4d15a9112979c654bda564134150" exitCode=0 Jan 26 15:57:14 crc kubenswrapper[4713]: I0126 15:57:14.308045 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerDied","Data":"59698a5520c4575e55acb5ccb5abe8d4aaec4d15a9112979c654bda564134150"} Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.353423 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerStarted","Data":"103fba827a5ed7ad227d663582039f70bb92cd84ea04422e9a43b578a40939d1"} Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.366185 4713 generic.go:334] "Generic (PLEG): container finished" podID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" containerID="c0c52f7042da6f8e751abae1da29cad7eaa249c53338867e8b994a88edfdf4ff" exitCode=0 Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.366242 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"ac7af3ca-8197-48cd-8480-a0c5292c9fa6","Type":"ContainerDied","Data":"c0c52f7042da6f8e751abae1da29cad7eaa249c53338867e8b994a88edfdf4ff"} Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.366266 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"ac7af3ca-8197-48cd-8480-a0c5292c9fa6","Type":"ContainerDied","Data":"c6f20cfc0d9a296e819bceb5bad1d4cd0003e4ada4504aebe843b244d85d2bfc"} Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.366277 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f20cfc0d9a296e819bceb5bad1d4cd0003e4ada4504aebe843b244d85d2bfc" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.380752 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.381836 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerStarted","Data":"5d0a0e729eef1663becb615eaf409358594819c531115b9d26b1bae292844130"} Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.466865 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.466834408 podStartE2EDuration="4.466834408s" podCreationTimestamp="2026-01-26 15:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:15.406022109 +0000 UTC m=+1410.543039344" watchObservedRunningTime="2026-01-26 15:57:15.466834408 +0000 UTC m=+1410.603851643" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.498158 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.498129377 podStartE2EDuration="5.498129377s" podCreationTimestamp="2026-01-26 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:15.473893006 +0000 UTC m=+1410.610910251" watchObservedRunningTime="2026-01-26 15:57:15.498129377 +0000 UTC m=+1410.635146612" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562078 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts\") pod \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562148 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom\") pod \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562286 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle\") pod \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562431 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs\") pod \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562456 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f6l9\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9\") pod \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.562519 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data\") pod 
\"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\" (UID: \"ac7af3ca-8197-48cd-8480-a0c5292c9fa6\") " Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.573576 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.592534 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts" (OuterVolumeSpecName: "scripts") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.592713 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs" (OuterVolumeSpecName: "certs") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.610508 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9" (OuterVolumeSpecName: "kube-api-access-6f6l9") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "kube-api-access-6f6l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.614055 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data" (OuterVolumeSpecName: "config-data") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.647118 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac7af3ca-8197-48cd-8480-a0c5292c9fa6" (UID: "ac7af3ca-8197-48cd-8480-a0c5292c9fa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665441 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665491 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665506 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665521 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665533 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:15 crc kubenswrapper[4713]: I0126 15:57:15.665544 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f6l9\" (UniqueName: \"kubernetes.io/projected/ac7af3ca-8197-48cd-8480-a0c5292c9fa6-kube-api-access-6f6l9\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.392700 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.442064 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.459939 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.471827 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:16 crc kubenswrapper[4713]: E0126 15:57:16.472285 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" containerName="cloudkitty-proc" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.472303 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" containerName="cloudkitty-proc" Jan 26 15:57:16 crc kubenswrapper[4713]: E0126 15:57:16.472316 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="dnsmasq-dns" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.472322 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="dnsmasq-dns" Jan 26 15:57:16 crc kubenswrapper[4713]: E0126 15:57:16.472346 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="init" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.472351 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="init" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.472559 4713 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" containerName="cloudkitty-proc" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.472569 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd08527-0793-4933-bcc1-780d121ece65" containerName="dnsmasq-dns" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.473257 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.476450 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.486494 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.615904 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-56d946d655-hw5fz"] Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.617691 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.624841 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.625256 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.625929 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.628650 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56d946d655-hw5fz"] Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.634673 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhs6\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.634725 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.635448 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.637595 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.637680 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.637723 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.740921 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-combined-ca-bundle\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.740973 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-public-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741039 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-run-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741092 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-log-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741120 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-etc-swift\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741152 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhs6\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741182 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-config-data\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741210 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741247 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxv7\" (UniqueName: \"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-kube-api-access-9bxv7\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741333 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741389 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741431 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741469 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.741501 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-internal-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.747743 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.750959 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.751221 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc 
kubenswrapper[4713]: I0126 15:57:16.760018 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.768269 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.783190 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhs6\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6\") pod \"cloudkitty-proc-0\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843511 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-combined-ca-bundle\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843556 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-public-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-run-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843632 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-log-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843647 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-etc-swift\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843673 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-config-data\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843698 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxv7\" (UniqueName: 
\"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-kube-api-access-9bxv7\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.843783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-internal-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.844732 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-log-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.845034 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-run-httpd\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.850469 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-combined-ca-bundle\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.857509 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-config-data\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.865970 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-internal-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.868242 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-public-tls-certs\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.872677 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bxv7\" (UniqueName: \"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-kube-api-access-9bxv7\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.873436 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.876906 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e6ea6b1-cd00-4552-8a20-cfb0055b58dc-etc-swift\") pod \"swift-proxy-56d946d655-hw5fz\" (UID: \"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc\") " pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:16 crc kubenswrapper[4713]: I0126 15:57:16.967499 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.225737 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.415633 4713 generic.go:334] "Generic (PLEG): container finished" podID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerID="b28def777f3a3cfb8248eb3963b9717c379abbc9050e3ae059af3b0f99f1c763" exitCode=0 Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.415690 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerDied","Data":"b28def777f3a3cfb8248eb3963b9717c379abbc9050e3ae059af3b0f99f1c763"} Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.653508 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.654472 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-central-agent" containerID="cri-o://f453747697c311ff4228199277fa77815194ea55007f5ee432753d6e678e17e5" gracePeriod=30 Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.654490 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" containerID="cri-o://45cf857e171f6445dd724e6b045541a45bfd6643449441d4952734261a764ce4" gracePeriod=30 Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.654632 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-notification-agent" containerID="cri-o://5e883bb238f72009a179e25f3cf3f01df40f37532bf429489a7814d1be58f054" gracePeriod=30 Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.654683 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="sg-core" containerID="cri-o://e5351d0edef600e45544d3e2e26fec9d54e7b7f4ddfc49e3aed45ff766d10a7f" gracePeriod=30 Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.668142 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": EOF" Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.707149 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 15:57:17 crc kubenswrapper[4713]: I0126 15:57:17.857211 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7af3ca-8197-48cd-8480-a0c5292c9fa6" 
path="/var/lib/kubelet/pods/ac7af3ca-8197-48cd-8480-a0c5292c9fa6/volumes" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.126440 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56d946d655-hw5fz"] Jan 26 15:57:18 crc kubenswrapper[4713]: W0126 15:57:18.149843 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e6ea6b1_cd00_4552_8a20_cfb0055b58dc.slice/crio-f1b4c2c7d18a6eaadda648d0b1500168fc21b00fe17800a9d4f56f502f2b0b7b WatchSource:0}: Error finding container f1b4c2c7d18a6eaadda648d0b1500168fc21b00fe17800a9d4f56f502f2b0b7b: Status 404 returned error can't find the container with id f1b4c2c7d18a6eaadda648d0b1500168fc21b00fe17800a9d4f56f502f2b0b7b Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.185427 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.190027 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.285554 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config\") pod \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.285694 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8c7t\" (UniqueName: \"kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t\") pod \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.285759 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle\") pod \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.285783 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config\") pod \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.285869 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs\") pod \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\" (UID: \"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4\") " Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.298930 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" (UID: "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.333262 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t" (OuterVolumeSpecName: "kube-api-access-w8c7t") pod "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" (UID: "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4"). InnerVolumeSpecName "kube-api-access-w8c7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.393499 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8c7t\" (UniqueName: \"kubernetes.io/projected/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-kube-api-access-w8c7t\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.393585 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.398762 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" (UID: "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.409865 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config" (OuterVolumeSpecName: "config") pod "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" (UID: "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.431225 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cbb889bd-76zsk" event={"ID":"9f5192b4-8f6f-4813-9e45-b9f03a6e47e4","Type":"ContainerDied","Data":"a9a8633898422c3f24e3855b554330da6bd513111cbe6b23f404dbe1d9aa5337"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.431284 4713 scope.go:117] "RemoveContainer" containerID="59698a5520c4575e55acb5ccb5abe8d4aaec4d15a9112979c654bda564134150" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.431452 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cbb889bd-76zsk" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.440124 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" (UID: "9f5192b4-8f6f-4813-9e45-b9f03a6e47e4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.440498 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56d946d655-hw5fz" event={"ID":"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc","Type":"ContainerStarted","Data":"f1b4c2c7d18a6eaadda648d0b1500168fc21b00fe17800a9d4f56f502f2b0b7b"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455474 4713 generic.go:334] "Generic (PLEG): container finished" podID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerID="45cf857e171f6445dd724e6b045541a45bfd6643449441d4952734261a764ce4" exitCode=0 Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455505 4713 generic.go:334] "Generic (PLEG): container finished" podID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerID="e5351d0edef600e45544d3e2e26fec9d54e7b7f4ddfc49e3aed45ff766d10a7f" exitCode=2 Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455512 4713 generic.go:334] "Generic (PLEG): container finished" podID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerID="f453747697c311ff4228199277fa77815194ea55007f5ee432753d6e678e17e5" exitCode=0 Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455560 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerDied","Data":"45cf857e171f6445dd724e6b045541a45bfd6643449441d4952734261a764ce4"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455589 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerDied","Data":"e5351d0edef600e45544d3e2e26fec9d54e7b7f4ddfc49e3aed45ff766d10a7f"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.455599 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerDied","Data":"f453747697c311ff4228199277fa77815194ea55007f5ee432753d6e678e17e5"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.457917 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"e64b34b6-9839-4ef8-83fb-7bb963c865aa","Type":"ContainerStarted","Data":"dbc1f5d6023a0912a139e284ac46f7f930ca7fbe2a257dc097ee9198291a9e19"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.457969 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"e64b34b6-9839-4ef8-83fb-7bb963c865aa","Type":"ContainerStarted","Data":"2c27a14630b97652c4d153e73a464d65429ed5e2fc0acd22639e22c94afb7fe5"} Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.470687 4713 scope.go:117] "RemoveContainer" containerID="b28def777f3a3cfb8248eb3963b9717c379abbc9050e3ae059af3b0f99f1c763" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.492146 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.492113126 podStartE2EDuration="2.492113126s" podCreationTimestamp="2026-01-26 15:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:18.482780874 +0000 UTC m=+1413.619798109" watchObservedRunningTime="2026-01-26 15:57:18.492113126 +0000 UTC m=+1413.629130371" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.496094 4713 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.496142 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.496158 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.858664 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:57:18 crc kubenswrapper[4713]: I0126 15:57:18.874574 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66cbb889bd-76zsk"] Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.472879 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56d946d655-hw5fz" event={"ID":"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc","Type":"ContainerStarted","Data":"e86661f2a08f982874ee3ca632480b3e612e516b30acab2d67ccc81ac5d60fbf"} Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.472949 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.472968 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.472980 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56d946d655-hw5fz" event={"ID":"0e6ea6b1-cd00-4552-8a20-cfb0055b58dc","Type":"ContainerStarted","Data":"bd9e80bae941fca66ed359c1055455abfbcda878f8946ca4e4bb877f18543c97"} Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.499555 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-56d946d655-hw5fz" podStartSLOduration=3.499538176 podStartE2EDuration="3.499538176s" podCreationTimestamp="2026-01-26 15:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:19.492620182 +0000 UTC m=+1414.629637417" watchObservedRunningTime="2026-01-26 15:57:19.499538176 +0000 UTC m=+1414.636555411" Jan 26 15:57:19 crc kubenswrapper[4713]: I0126 15:57:19.821987 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" path="/var/lib/kubelet/pods/9f5192b4-8f6f-4813-9e45-b9f03a6e47e4/volumes" Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.195264 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.196032 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-log" containerID="cri-o://698ab889796c9f3d3d5bf409eac17338db2d15e7b38293b2132ae073586dfe09" gracePeriod=30 Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.196140 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-httpd" 
containerID="cri-o://5d0a0e729eef1663becb615eaf409358594819c531115b9d26b1bae292844130" gracePeriod=30 Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.493186 4713 generic.go:334] "Generic (PLEG): container finished" podID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerID="5d0a0e729eef1663becb615eaf409358594819c531115b9d26b1bae292844130" exitCode=0 Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.493231 4713 generic.go:334] "Generic (PLEG): container finished" podID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerID="698ab889796c9f3d3d5bf409eac17338db2d15e7b38293b2132ae073586dfe09" exitCode=143 Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.494462 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerDied","Data":"5d0a0e729eef1663becb615eaf409358594819c531115b9d26b1bae292844130"} Jan 26 15:57:20 crc kubenswrapper[4713]: I0126 15:57:20.494501 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerDied","Data":"698ab889796c9f3d3d5bf409eac17338db2d15e7b38293b2132ae073586dfe09"} Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.545415 4713 generic.go:334] "Generic (PLEG): container finished" podID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerID="5e883bb238f72009a179e25f3cf3f01df40f37532bf429489a7814d1be58f054" exitCode=0 Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.545766 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerDied","Data":"5e883bb238f72009a179e25f3cf3f01df40f37532bf429489a7814d1be58f054"} Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.627675 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.627712 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.702060 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:21 crc kubenswrapper[4713]: I0126 15:57:21.708063 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:22 crc kubenswrapper[4713]: I0126 15:57:22.555245 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:22 crc kubenswrapper[4713]: I0126 15:57:22.555568 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:23 crc kubenswrapper[4713]: I0126 15:57:23.135854 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": dial tcp 10.217.0.180:3000: connect: connection refused" Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.673839 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.674178 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:57:24 
crc kubenswrapper[4713]: I0126 15:57:24.674189 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.674443 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-log" containerID="cri-o://9d32d94847475c92e6bdec901361f495d5bad467dc21c729a70bdbfa23738e24" gracePeriod=30 Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.674567 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-httpd" containerID="cri-o://103fba827a5ed7ad227d663582039f70bb92cd84ea04422e9a43b578a40939d1" gracePeriod=30 Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.687220 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": EOF" Jan 26 15:57:24 crc kubenswrapper[4713]: I0126 15:57:24.691640 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": EOF" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.332679 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2mmsz"] Jan 26 15:57:25 crc kubenswrapper[4713]: E0126 15:57:25.333431 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-httpd" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.333448 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-httpd" Jan 26 15:57:25 crc kubenswrapper[4713]: E0126 15:57:25.333476 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-api" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.333482 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-api" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.334903 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-httpd" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.334939 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5192b4-8f6f-4813-9e45-b9f03a6e47e4" containerName="neutron-api" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.335997 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.363677 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2mmsz"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.467779 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh94\" (UniqueName: \"kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.468233 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.545339 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3e94-account-create-update-qmnfk"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.547063 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.551045 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.554055 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e94-account-create-update-qmnfk"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.569604 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.569778 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh94\" (UniqueName: \"kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.570810 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.602380 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh94\" (UniqueName: \"kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94\") pod \"nova-api-db-create-2mmsz\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.603398 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerID="9d32d94847475c92e6bdec901361f495d5bad467dc21c729a70bdbfa23738e24" 
exitCode=143 Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.603452 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerDied","Data":"9d32d94847475c92e6bdec901361f495d5bad467dc21c729a70bdbfa23738e24"} Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.663403 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pdtm9"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.666799 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.671347 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsdp\" (UniqueName: \"kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.671763 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.686191 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pdtm9"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.694047 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.754933 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bxdgl"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.756328 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.777481 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsdp\" (UniqueName: \"kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.777580 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hxt\" (UniqueName: \"kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.777641 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.777770 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.779954 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.801254 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bxdgl"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.819423 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsdp\" (UniqueName: \"kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp\") pod \"nova-api-3e94-account-create-update-qmnfk\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.837334 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-19b6-account-create-update-h4s97"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.841955 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.847820 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.848583 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-19b6-account-create-update-h4s97"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.881450 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.882239 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hxt\" (UniqueName: \"kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.883864 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.884442 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56n2b\" (UniqueName: \"kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.886458 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.889969 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.908238 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hxt\" (UniqueName: \"kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt\") pod \"nova-cell0-db-create-pdtm9\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.966141 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-57ae-account-create-update-qv9q4"] Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.967932 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.971119 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.988192 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.988407 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56n2b\" (UniqueName: \"kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.988488 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r8ck\" (UniqueName: \"kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.988618 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.989573 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:25 crc kubenswrapper[4713]: I0126 15:57:25.994000 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-57ae-account-create-update-qv9q4"] Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.012666 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.032259 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56n2b\" (UniqueName: \"kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b\") pod \"nova-cell1-db-create-bxdgl\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.090809 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.090937 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r8ck\" (UniqueName: \"kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.091072 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.091134 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57g4x\" (UniqueName: \"kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.091818 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.092004 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.121989 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r8ck\" (UniqueName: \"kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck\") pod \"nova-cell0-19b6-account-create-update-h4s97\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.186934 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.192796 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57g4x\" (UniqueName: \"kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.192866 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.193919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.213028 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57g4x\" (UniqueName: \"kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x\") pod \"nova-cell1-57ae-account-create-update-qv9q4\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.302898 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.978788 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:26 crc kubenswrapper[4713]: I0126 15:57:26.992798 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56d946d655-hw5fz" Jan 26 15:57:27 crc kubenswrapper[4713]: I0126 15:57:27.992596 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:27 crc kubenswrapper[4713]: I0126 15:57:27.994584 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:29 crc kubenswrapper[4713]: E0126 15:57:29.613624 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 26 15:57:29 crc kubenswrapper[4713]: E0126 15:57:29.614018 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64bh5dbh5c4h5b8hcdh5f7h89h64dh5c6h7bhf9h9h578hc7h6dh67fh585h556h59ch7fh65h89hf4h75h59bh5bch5fdh685h6bh5cdh689h688q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cgvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(5ee23a80-20ad-45b5-9670-c165085175ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:57:29 crc kubenswrapper[4713]: E0126 15:57:29.615144 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="5ee23a80-20ad-45b5-9670-c165085175ab" Jan 26 15:57:29 crc kubenswrapper[4713]: E0126 15:57:29.676487 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="5ee23a80-20ad-45b5-9670-c165085175ab" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.133853 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214338 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214810 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdxwx\" (UniqueName: \"kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214838 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214874 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214922 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.214961 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml\") 
pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.215019 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts\") pod \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\" (UID: \"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.215606 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.215990 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.216572 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.234090 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts" (OuterVolumeSpecName: "scripts") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.245357 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx" (OuterVolumeSpecName: "kube-api-access-pdxwx") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "kube-api-access-pdxwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.266704 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.318087 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdxwx\" (UniqueName: \"kubernetes.io/projected/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-kube-api-access-pdxwx\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.318145 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.318163 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.318182 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.333752 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.350088 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.410790 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data" (OuterVolumeSpecName: "config-data") pod "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" (UID: "5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420388 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2lsj\" (UniqueName: \"kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420450 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420509 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420562 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420735 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420762 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.420943 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts\") pod \"ccb1131b-e156-481e-a986-e6231bf9b82c\" (UID: \"ccb1131b-e156-481e-a986-e6231bf9b82c\") " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421061 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs" (OuterVolumeSpecName: "logs") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421528 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421748 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421777 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421790 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb1131b-e156-481e-a986-e6231bf9b82c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.421804 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.424558 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj" (OuterVolumeSpecName: "kube-api-access-n2lsj") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "kube-api-access-n2lsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.432405 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts" (OuterVolumeSpecName: "scripts") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.443532 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913" (OuterVolumeSpecName: "glance") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "pvc-ee75bc78-62c3-4a56-b6d6-deef53255913". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.450143 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.474066 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data" (OuterVolumeSpecName: "config-data") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.486612 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ccb1131b-e156-481e-a986-e6231bf9b82c" (UID: "ccb1131b-e156-481e-a986-e6231bf9b82c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524181 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2lsj\" (UniqueName: \"kubernetes.io/projected/ccb1131b-e156-481e-a986-e6231bf9b82c-kube-api-access-n2lsj\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524540 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524602 4713 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524685 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") on node \"crc\" " Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524745 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.524802 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb1131b-e156-481e-a986-e6231bf9b82c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.556876 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.557105 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ee75bc78-62c3-4a56-b6d6-deef53255913" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913") on node "crc" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.634354 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.691527 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.691840 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ccb1131b-e156-481e-a986-e6231bf9b82c","Type":"ContainerDied","Data":"6919bf2565ee558dadd5415f859ba912efdba54d56b7486fb7d07e0f86a7cdcc"} Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.692227 4713 scope.go:117] "RemoveContainer" containerID="5d0a0e729eef1663becb615eaf409358594819c531115b9d26b1bae292844130" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.707627 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6","Type":"ContainerDied","Data":"4cd3c2db2abb77d0f51f17fe552f96b08089e0847ba4924979b7d81158f75597"} Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.707790 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.784686 4713 scope.go:117] "RemoveContainer" containerID="698ab889796c9f3d3d5bf409eac17338db2d15e7b38293b2132ae073586dfe09" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.785437 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.811736 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.812619 4713 scope.go:117] "RemoveContainer" containerID="45cf857e171f6445dd724e6b045541a45bfd6643449441d4952734261a764ce4" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.829304 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.836413 4713 scope.go:117] "RemoveContainer" containerID="e5351d0edef600e45544d3e2e26fec9d54e7b7f4ddfc49e3aed45ff766d10a7f" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.840897 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841350 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-central-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841387 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-central-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841410 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841418 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841434 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-notification-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841440 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-notification-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841452 4713 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="sg-core" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841458 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="sg-core" Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841479 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-log" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841484 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-log" Jan 26 15:57:30 crc kubenswrapper[4713]: E0126 15:57:30.841500 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841507 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841687 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="sg-core" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841700 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-notification-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841711 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841726 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="proxy-httpd" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841739 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" containerName="glance-log" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.841748 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" containerName="ceilometer-central-agent" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.842938 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.844997 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.845186 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.855982 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.868336 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.885246 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.891293 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.893321 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.893797 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.912333 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-57ae-account-create-update-qv9q4"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.918894 4713 scope.go:117] "RemoveContainer" containerID="5e883bb238f72009a179e25f3cf3f01df40f37532bf429489a7814d1be58f054" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.928374 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954507 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954577 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954603 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnl68\" (UniqueName: \"kubernetes.io/projected/ad16ac21-9aee-4776-b4fb-cb51324f625f-kube-api-access-lnl68\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954682 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-logs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954837 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954868 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.954926 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:30 crc kubenswrapper[4713]: I0126 15:57:30.965936 4713 scope.go:117] "RemoveContainer" containerID="f453747697c311ff4228199277fa77815194ea55007f5ee432753d6e678e17e5" Jan 26 15:57:30 crc kubenswrapper[4713]: W0126 15:57:30.999519 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod832abf5e_06a6_4e5f_8d93_0e91eefdb0de.slice/crio-d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae WatchSource:0}: Error finding container d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae: Status 404 returned error can't find the container with id d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae Jan 26 15:57:31 crc kubenswrapper[4713]: W0126 15:57:31.002617 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d659c0_5c5b_461f_89d0_435b02bd409b.slice/crio-af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c WatchSource:0}: Error finding container af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c: Status 404 returned error can't find the container with id af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.015401 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bxdgl"] Jan 26 15:57:31 crc kubenswrapper[4713]: W0126 15:57:31.027349 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4970397f_f884_40ad_bca5_6c272f27ab4f.slice/crio-763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011 WatchSource:0}: Error finding container 763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011: Status 404 returned error can't find the container with id 763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011 Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.031683 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e94-account-create-update-qmnfk"] Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.045470 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-19b6-account-create-update-h4s97"] Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061317 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbckw\" (UniqueName: \"kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061431 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061540 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061578 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnl68\" (UniqueName: \"kubernetes.io/projected/ad16ac21-9aee-4776-b4fb-cb51324f625f-kube-api-access-lnl68\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061615 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061683 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-logs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061713 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061800 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061821 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061848 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061906 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061937 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.061990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.062050 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.062141 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.063247 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-logs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.063325 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad16ac21-9aee-4776-b4fb-cb51324f625f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.064992 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pdtm9"] Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.066033 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.066063 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/00dc91cd9df64dbe706f69ffd599c2ae7292b0cc0cf466faa03c0fe7216c3630/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.069435 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.069674 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.069916 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.070673 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad16ac21-9aee-4776-b4fb-cb51324f625f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.080428 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2mmsz"]
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.089473 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnl68\" (UniqueName: \"kubernetes.io/projected/ad16ac21-9aee-4776-b4fb-cb51324f625f-kube-api-access-lnl68\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.164435 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbckw\" (UniqueName: \"kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165021 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165092 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165165 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165186 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165232 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.165322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.167054 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.167599 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.171492 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee75bc78-62c3-4a56-b6d6-deef53255913\") pod \"glance-default-external-api-0\" (UID: \"ad16ac21-9aee-4776-b4fb-cb51324f625f\") " pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.173961 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.179291 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.184060 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.192595 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbckw\" (UniqueName: \"kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.193945 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts\") pod \"ceilometer-0\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.210265 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.220635 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.732641 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" event={"ID":"832abf5e-06a6-4e5f-8d93-0e91eefdb0de","Type":"ContainerStarted","Data":"c825c0f3309e6cf9330a0b533d515fb9cc8f8c3f408053b7692eb620a3aa1ead"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.732919 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" event={"ID":"832abf5e-06a6-4e5f-8d93-0e91eefdb0de","Type":"ContainerStarted","Data":"d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.740838 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pdtm9" event={"ID":"4970397f-f884-40ad-bca5-6c272f27ab4f","Type":"ContainerStarted","Data":"6c2f95925bc58c01899034302149f7a0502fbf5d1913f568ba00ef5f70c4f32a"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.740873 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pdtm9" event={"ID":"4970397f-f884-40ad-bca5-6c272f27ab4f","Type":"ContainerStarted","Data":"763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.748072 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bxdgl" event={"ID":"02d659c0-5c5b-461f-89d0-435b02bd409b","Type":"ContainerStarted","Data":"e6634f90c1c979cfac0111338f8a7212c4bc5a30e1489ef98f2f50ba8f364bc4"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.748157 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bxdgl" event={"ID":"02d659c0-5c5b-461f-89d0-435b02bd409b","Type":"ContainerStarted","Data":"af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.756894 4713 generic.go:334] "Generic (PLEG): container finished" podID="a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" containerID="b47eb8872d6465f6b4d32e8f33295ef5509e59a6b7251dc02a22b96b3ddae660" exitCode=0
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.757053 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" event={"ID":"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd","Type":"ContainerDied","Data":"b47eb8872d6465f6b4d32e8f33295ef5509e59a6b7251dc02a22b96b3ddae660"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.757230 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" event={"ID":"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd","Type":"ContainerStarted","Data":"d35b12dbc1ddc184ec678ad07975bb1e5b0a55ab949ecc4830ef3b333779e160"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.765187 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" podStartSLOduration=6.765154849 podStartE2EDuration="6.765154849s" podCreationTimestamp="2026-01-26 15:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:31.7534576 +0000 UTC m=+1426.890474835" watchObservedRunningTime="2026-01-26 15:57:31.765154849 +0000 UTC m=+1426.902172084"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.770210 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2mmsz" event={"ID":"dc98b501-5b02-49f5-a1e3-2543e981eab8","Type":"ContainerStarted","Data":"2eb584d2d5d062802061e0c73deb5ce4b51e1cef699c90eaf089cbb92206eaa7"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.770480 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2mmsz" event={"ID":"dc98b501-5b02-49f5-a1e3-2543e981eab8","Type":"ContainerStarted","Data":"992f7d8204fa2901d6131d74848f0ec3a82a23f791b97abc6c4aff85f572a643"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.773341 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerID="103fba827a5ed7ad227d663582039f70bb92cd84ea04422e9a43b578a40939d1" exitCode=0
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.773406 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerDied","Data":"103fba827a5ed7ad227d663582039f70bb92cd84ea04422e9a43b578a40939d1"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.774165 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bxdgl" podStartSLOduration=6.774141151 podStartE2EDuration="6.774141151s" podCreationTimestamp="2026-01-26 15:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:31.766846087 +0000 UTC m=+1426.903863322" watchObservedRunningTime="2026-01-26 15:57:31.774141151 +0000 UTC m=+1426.911158386"
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.777727 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e94-account-create-update-qmnfk" event={"ID":"97311fac-af74-45cb-ad3a-c7a67efaf219","Type":"ContainerStarted","Data":"1d3b8fe62f61d99d10256eb79b6a763af91cc62194b11ed4b3902401600ab3f0"}
Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.777814 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e94-account-create-update-qmnfk" event={"ID":"97311fac-af74-45cb-ad3a-c7a67efaf219","Type":"ContainerStarted","Data":"c929d12b31fc0bb38ea68a90dc0928c29a571a557f554e7be86dda42f68f6213"}
Jan 26 15:57:31
crc kubenswrapper[4713]: I0126 15:57:31.793176 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-pdtm9" podStartSLOduration=6.793149266 podStartE2EDuration="6.793149266s" podCreationTimestamp="2026-01-26 15:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:31.782228109 +0000 UTC m=+1426.919245344" watchObservedRunningTime="2026-01-26 15:57:31.793149266 +0000 UTC m=+1426.930166501" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.830355 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-2mmsz" podStartSLOduration=6.8303308099999995 podStartE2EDuration="6.83033081s" podCreationTimestamp="2026-01-26 15:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:31.815289168 +0000 UTC m=+1426.952306403" watchObservedRunningTime="2026-01-26 15:57:31.83033081 +0000 UTC m=+1426.967348045" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.834458 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6" path="/var/lib/kubelet/pods/5c675f5b-7900-4ba5-baf3-7ff64bf3a2c6/volumes" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.836098 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb1131b-e156-481e-a986-e6231bf9b82c" path="/var/lib/kubelet/pods/ccb1131b-e156-481e-a986-e6231bf9b82c/volumes" Jan 26 15:57:31 crc kubenswrapper[4713]: I0126 15:57:31.872535 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3e94-account-create-update-qmnfk" podStartSLOduration=6.872507085 podStartE2EDuration="6.872507085s" podCreationTimestamp="2026-01-26 15:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:31.844050915 +0000 UTC m=+1426.981068150" watchObservedRunningTime="2026-01-26 15:57:31.872507085 +0000 UTC m=+1427.009524330" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.162463 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.284405 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.414040 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.510165 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr2rf\" (UniqueName: \"kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.510594 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.510778 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.510866 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.510956 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.511008 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.511050 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.511211 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs\") pod \"e2a94d3c-d963-4279-9c5b-89c52d701d33\" (UID: \"e2a94d3c-d963-4279-9c5b-89c52d701d33\") " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.513012 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs" (OuterVolumeSpecName: "logs") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.517953 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.523256 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf" (OuterVolumeSpecName: "kube-api-access-nr2rf") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "kube-api-access-nr2rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.523754 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts" (OuterVolumeSpecName: "scripts") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.546081 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc" (OuterVolumeSpecName: "glance") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.558851 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.587617 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614754 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614804 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614817 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr2rf\" (UniqueName: \"kubernetes.io/projected/e2a94d3c-d963-4279-9c5b-89c52d701d33-kube-api-access-nr2rf\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614833 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614870 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") on node \"crc\" " Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614886 4713 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.614899 4713 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a94d3c-d963-4279-9c5b-89c52d701d33-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.616178 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data" (OuterVolumeSpecName: "config-data") pod "e2a94d3c-d963-4279-9c5b-89c52d701d33" (UID: "e2a94d3c-d963-4279-9c5b-89c52d701d33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.645952 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.646125 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc") on node "crc" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.756244 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a94d3c-d963-4279-9c5b-89c52d701d33-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.756296 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.790546 4713 generic.go:334] "Generic (PLEG): container finished" podID="02d659c0-5c5b-461f-89d0-435b02bd409b" containerID="e6634f90c1c979cfac0111338f8a7212c4bc5a30e1489ef98f2f50ba8f364bc4" exitCode=0 Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.790822 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bxdgl" event={"ID":"02d659c0-5c5b-461f-89d0-435b02bd409b","Type":"ContainerDied","Data":"e6634f90c1c979cfac0111338f8a7212c4bc5a30e1489ef98f2f50ba8f364bc4"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.793281 4713 generic.go:334] "Generic (PLEG): container finished" podID="97311fac-af74-45cb-ad3a-c7a67efaf219" containerID="1d3b8fe62f61d99d10256eb79b6a763af91cc62194b11ed4b3902401600ab3f0" exitCode=0 Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.793353 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e94-account-create-update-qmnfk" event={"ID":"97311fac-af74-45cb-ad3a-c7a67efaf219","Type":"ContainerDied","Data":"1d3b8fe62f61d99d10256eb79b6a763af91cc62194b11ed4b3902401600ab3f0"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.795321 4713 generic.go:334] "Generic (PLEG): container finished" podID="dc98b501-5b02-49f5-a1e3-2543e981eab8" containerID="2eb584d2d5d062802061e0c73deb5ce4b51e1cef699c90eaf089cbb92206eaa7" exitCode=0 Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.795390 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2mmsz" event={"ID":"dc98b501-5b02-49f5-a1e3-2543e981eab8","Type":"ContainerDied","Data":"2eb584d2d5d062802061e0c73deb5ce4b51e1cef699c90eaf089cbb92206eaa7"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.799583 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerStarted","Data":"0c5e2988e6c643824836091185cbb750fa9300a13ba50886daa9af8e82f259f8"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.802275 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad16ac21-9aee-4776-b4fb-cb51324f625f","Type":"ContainerStarted","Data":"a088b5f300eb468da3c97ef684d1d4443ce5f5db879c4958b6338bc8a355ee77"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.805720 4713 generic.go:334] "Generic (PLEG): container finished" podID="832abf5e-06a6-4e5f-8d93-0e91eefdb0de" containerID="c825c0f3309e6cf9330a0b533d515fb9cc8f8c3f408053b7692eb620a3aa1ead" exitCode=0 Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.805843 4713 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" event={"ID":"832abf5e-06a6-4e5f-8d93-0e91eefdb0de","Type":"ContainerDied","Data":"c825c0f3309e6cf9330a0b533d515fb9cc8f8c3f408053b7692eb620a3aa1ead"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.821028 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.821020 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2a94d3c-d963-4279-9c5b-89c52d701d33","Type":"ContainerDied","Data":"3783ef922fcba86d151b4c56fc4c3a470dadaae249a12d2c031f942c5b426401"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.822624 4713 scope.go:117] "RemoveContainer" containerID="103fba827a5ed7ad227d663582039f70bb92cd84ea04422e9a43b578a40939d1" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.824896 4713 generic.go:334] "Generic (PLEG): container finished" podID="4970397f-f884-40ad-bca5-6c272f27ab4f" containerID="6c2f95925bc58c01899034302149f7a0502fbf5d1913f568ba00ef5f70c4f32a" exitCode=0 Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.825150 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pdtm9" event={"ID":"4970397f-f884-40ad-bca5-6c272f27ab4f","Type":"ContainerDied","Data":"6c2f95925bc58c01899034302149f7a0502fbf5d1913f568ba00ef5f70c4f32a"} Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.877599 4713 scope.go:117] "RemoveContainer" containerID="9d32d94847475c92e6bdec901361f495d5bad467dc21c729a70bdbfa23738e24" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.898214 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.913351 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.967035 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:32 crc kubenswrapper[4713]: E0126 15:57:32.969751 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-log" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.969770 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-log" Jan 26 15:57:32 crc kubenswrapper[4713]: E0126 15:57:32.969786 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-httpd" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.969793 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-httpd" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.970050 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-log" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.970070 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" containerName="glance-httpd" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.971384 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.976517 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 15:57:32 crc kubenswrapper[4713]: I0126 15:57:32.976710 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.000568 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070449 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070499 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7ncd\" (UniqueName: \"kubernetes.io/projected/21a8d06f-05be-44a6-82c7-f61788570aad-kube-api-access-f7ncd\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070528 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070553 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070617 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070656 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070695 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.070732 
4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172291 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172676 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172827 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172885 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172921 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7ncd\" (UniqueName: \"kubernetes.io/projected/21a8d06f-05be-44a6-82c7-f61788570aad-kube-api-access-f7ncd\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172957 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.172985 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.173054 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.173614 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.179128 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.179172 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4dc69d14e8e0f9ca8a772d269bb39f8a91314d344a69118d1458dfeb18a9550/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.182710 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a8d06f-05be-44a6-82c7-f61788570aad-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.184214 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.184348 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.186488 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.191013 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a8d06f-05be-44a6-82c7-f61788570aad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.195222 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7ncd\" (UniqueName: \"kubernetes.io/projected/21a8d06f-05be-44a6-82c7-f61788570aad-kube-api-access-f7ncd\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.295382 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c356d003-1702-421c-aa98-6a9bf2bbd2dc\") pod \"glance-default-internal-api-0\" (UID: \"21a8d06f-05be-44a6-82c7-f61788570aad\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.304159 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.304246 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.304307 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.305203 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.305263 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2" gracePeriod=600 Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.334298 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.385065 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.479609 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts\") pod \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.480081 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57g4x\" (UniqueName: \"kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x\") pod \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\" (UID: \"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd\") " Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.480484 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" (UID: "a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.481172 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.487507 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x" (OuterVolumeSpecName: "kube-api-access-57g4x") pod "a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" (UID: "a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd"). InnerVolumeSpecName "kube-api-access-57g4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.585089 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57g4x\" (UniqueName: \"kubernetes.io/projected/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd-kube-api-access-57g4x\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.665511 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.866265 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a94d3c-d963-4279-9c5b-89c52d701d33" path="/var/lib/kubelet/pods/e2a94d3c-d963-4279-9c5b-89c52d701d33/volumes" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.876513 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2" exitCode=0 Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.876644 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2"} Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.876689 4713 scope.go:117] "RemoveContainer" containerID="90772569024cad074f2b7eff5e4a439736928d25bdd915e9b6f3f6c1f8edbe62" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.891596 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" event={"ID":"a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd","Type":"ContainerDied","Data":"d35b12dbc1ddc184ec678ad07975bb1e5b0a55ab949ecc4830ef3b333779e160"} Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.891653 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d35b12dbc1ddc184ec678ad07975bb1e5b0a55ab949ecc4830ef3b333779e160" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.891745 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-57ae-account-create-update-qv9q4" Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.898009 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerStarted","Data":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} Jan 26 15:57:33 crc kubenswrapper[4713]: I0126 15:57:33.899924 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad16ac21-9aee-4776-b4fb-cb51324f625f","Type":"ContainerStarted","Data":"5c8a4f42adcebb94d87d817a032837675e6f373d16a206ba0878e1609facba4e"} Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.052244 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.501109 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.532518 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts\") pod \"4970397f-f884-40ad-bca5-6c272f27ab4f\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.532703 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hxt\" (UniqueName: \"kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt\") pod \"4970397f-f884-40ad-bca5-6c272f27ab4f\" (UID: \"4970397f-f884-40ad-bca5-6c272f27ab4f\") " Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.533861 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4970397f-f884-40ad-bca5-6c272f27ab4f" (UID: "4970397f-f884-40ad-bca5-6c272f27ab4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.539948 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt" (OuterVolumeSpecName: "kube-api-access-85hxt") pod "4970397f-f884-40ad-bca5-6c272f27ab4f" (UID: "4970397f-f884-40ad-bca5-6c272f27ab4f"). InnerVolumeSpecName "kube-api-access-85hxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.634917 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4970397f-f884-40ad-bca5-6c272f27ab4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.634972 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hxt\" (UniqueName: \"kubernetes.io/projected/4970397f-f884-40ad-bca5-6c272f27ab4f-kube-api-access-85hxt\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.956192 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bxdgl" event={"ID":"02d659c0-5c5b-461f-89d0-435b02bd409b","Type":"ContainerDied","Data":"af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c"} Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.956263 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6de726d4f576bd9ac8ceac4d529cebe81ba31946e3948851c39d7f4b67826c" Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.977114 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21a8d06f-05be-44a6-82c7-f61788570aad","Type":"ContainerStarted","Data":"651afbacfca8764ae1751c9bdd6e2d9bc402ae6769e4f8632d3d1c80593d77ed"} Jan 26 15:57:34 crc kubenswrapper[4713]: I0126 15:57:34.998617 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerStarted","Data":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.022292 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad16ac21-9aee-4776-b4fb-cb51324f625f","Type":"ContainerStarted","Data":"33d07fca518918aaf4a9ff649a90ab4a9f5eb7b7ca09cad766b0c9d2d07ae370"} Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.093607 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.093585133 podStartE2EDuration="5.093585133s" podCreationTimestamp="2026-01-26 15:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:35.08100684 +0000 UTC m=+1430.218024075" watchObservedRunningTime="2026-01-26 15:57:35.093585133 +0000 UTC m=+1430.230602368" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.097780 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39"} Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.111991 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pdtm9" event={"ID":"4970397f-f884-40ad-bca5-6c272f27ab4f","Type":"ContainerDied","Data":"763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011"} Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.112252 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763bbfa9e60b9b988bb797a5f30f267a34ddea9e2f56889b6fe765052772f011" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 
15:57:35.112415 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pdtm9" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.200570 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.208174 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.218443 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.265654 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts\") pod \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.265767 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsdp\" (UniqueName: \"kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp\") pod \"97311fac-af74-45cb-ad3a-c7a67efaf219\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.265796 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts\") pod \"97311fac-af74-45cb-ad3a-c7a67efaf219\" (UID: \"97311fac-af74-45cb-ad3a-c7a67efaf219\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.265831 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56n2b\" (UniqueName: \"kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b\") pod \"02d659c0-5c5b-461f-89d0-435b02bd409b\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.265965 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts\") pod \"02d659c0-5c5b-461f-89d0-435b02bd409b\" (UID: \"02d659c0-5c5b-461f-89d0-435b02bd409b\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.266070 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r8ck\" (UniqueName: \"kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck\") pod \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\" (UID: \"832abf5e-06a6-4e5f-8d93-0e91eefdb0de\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.266719 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "832abf5e-06a6-4e5f-8d93-0e91eefdb0de" (UID: "832abf5e-06a6-4e5f-8d93-0e91eefdb0de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.267018 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.268012 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02d659c0-5c5b-461f-89d0-435b02bd409b" (UID: "02d659c0-5c5b-461f-89d0-435b02bd409b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.272125 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97311fac-af74-45cb-ad3a-c7a67efaf219" (UID: "97311fac-af74-45cb-ad3a-c7a67efaf219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.279963 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.285782 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b" (OuterVolumeSpecName: "kube-api-access-56n2b") pod "02d659c0-5c5b-461f-89d0-435b02bd409b" (UID: "02d659c0-5c5b-461f-89d0-435b02bd409b"). InnerVolumeSpecName "kube-api-access-56n2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.285929 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck" (OuterVolumeSpecName: "kube-api-access-7r8ck") pod "832abf5e-06a6-4e5f-8d93-0e91eefdb0de" (UID: "832abf5e-06a6-4e5f-8d93-0e91eefdb0de"). InnerVolumeSpecName "kube-api-access-7r8ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.286512 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp" (OuterVolumeSpecName: "kube-api-access-dgsdp") pod "97311fac-af74-45cb-ad3a-c7a67efaf219" (UID: "97311fac-af74-45cb-ad3a-c7a67efaf219"). InnerVolumeSpecName "kube-api-access-dgsdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.368586 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vh94\" (UniqueName: \"kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94\") pod \"dc98b501-5b02-49f5-a1e3-2543e981eab8\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.368880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts\") pod \"dc98b501-5b02-49f5-a1e3-2543e981eab8\" (UID: \"dc98b501-5b02-49f5-a1e3-2543e981eab8\") " Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369510 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d659c0-5c5b-461f-89d0-435b02bd409b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369510 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc98b501-5b02-49f5-a1e3-2543e981eab8" (UID: "dc98b501-5b02-49f5-a1e3-2543e981eab8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369536 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r8ck\" (UniqueName: \"kubernetes.io/projected/832abf5e-06a6-4e5f-8d93-0e91eefdb0de-kube-api-access-7r8ck\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369598 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgsdp\" (UniqueName: \"kubernetes.io/projected/97311fac-af74-45cb-ad3a-c7a67efaf219-kube-api-access-dgsdp\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369614 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97311fac-af74-45cb-ad3a-c7a67efaf219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.369668 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56n2b\" (UniqueName: \"kubernetes.io/projected/02d659c0-5c5b-461f-89d0-435b02bd409b-kube-api-access-56n2b\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.372725 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94" (OuterVolumeSpecName: "kube-api-access-2vh94") pod "dc98b501-5b02-49f5-a1e3-2543e981eab8" (UID: "dc98b501-5b02-49f5-a1e3-2543e981eab8"). InnerVolumeSpecName "kube-api-access-2vh94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.472078 4713 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc98b501-5b02-49f5-a1e3-2543e981eab8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:35 crc kubenswrapper[4713]: I0126 15:57:35.472413 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vh94\" (UniqueName: \"kubernetes.io/projected/dc98b501-5b02-49f5-a1e3-2543e981eab8-kube-api-access-2vh94\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.143396 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21a8d06f-05be-44a6-82c7-f61788570aad","Type":"ContainerStarted","Data":"2944af4030bb0c4bd6f2ce37b1711388aeeb2bf240d0635299eb151b324e1788"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.154354 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e94-account-create-update-qmnfk" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.155126 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e94-account-create-update-qmnfk" event={"ID":"97311fac-af74-45cb-ad3a-c7a67efaf219","Type":"ContainerDied","Data":"c929d12b31fc0bb38ea68a90dc0928c29a571a557f554e7be86dda42f68f6213"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.155157 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c929d12b31fc0bb38ea68a90dc0928c29a571a557f554e7be86dda42f68f6213" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.165874 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2mmsz" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.165869 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2mmsz" event={"ID":"dc98b501-5b02-49f5-a1e3-2543e981eab8","Type":"ContainerDied","Data":"992f7d8204fa2901d6131d74848f0ec3a82a23f791b97abc6c4aff85f572a643"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.166026 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992f7d8204fa2901d6131d74848f0ec3a82a23f791b97abc6c4aff85f572a643" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.180855 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerStarted","Data":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.188263 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" event={"ID":"832abf5e-06a6-4e5f-8d93-0e91eefdb0de","Type":"ContainerDied","Data":"d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.188309 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f6e1724befc133da7cb1b0fcd3f5655376fabe7a682ec16a6e3bc1dbed7bae" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.188408 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-19b6-account-create-update-h4s97" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.194406 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerID="7c930878284e055cb18e895a81de72fa3a3e28db807f621e9841129b8b204561" exitCode=137 Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.194543 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bxdgl" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.195093 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerDied","Data":"7c930878284e055cb18e895a81de72fa3a3e28db807f621e9841129b8b204561"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.195166 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2","Type":"ContainerDied","Data":"cf230e77093a361ad02790fac2bf4da58219f2886d48c4655491bce233d4386e"} Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.195192 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf230e77093a361ad02790fac2bf4da58219f2886d48c4655491bce233d4386e" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.215467 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.434575 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.434854 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.434928 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.434980 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.435048 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.435085 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt4m5\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5\") pod 
\"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.435267 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom\") pod \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\" (UID: \"e2949cfe-9664-49c5-8d4b-53f4b57bb8b2\") " Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.442464 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs" (OuterVolumeSpecName: "logs") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.448634 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs" (OuterVolumeSpecName: "certs") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.450485 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts" (OuterVolumeSpecName: "scripts") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.450558 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.476661 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5" (OuterVolumeSpecName: "kube-api-access-bt4m5") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "kube-api-access-bt4m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.511306 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.511758 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data" (OuterVolumeSpecName: "config-data") pod "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" (UID: "e2949cfe-9664-49c5-8d4b-53f4b57bb8b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.538817 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539115 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539209 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539314 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539455 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt4m5\" (UniqueName: \"kubernetes.io/projected/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-kube-api-access-bt4m5\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539562 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:36 crc kubenswrapper[4713]: I0126 15:57:36.539651 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.212423 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21a8d06f-05be-44a6-82c7-f61788570aad","Type":"ContainerStarted","Data":"85324619c56d46d271eedf2f27f3a0784d7a4281a15f66d82ab8d92e1097e30c"} Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.212513 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.253943 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.253897172 podStartE2EDuration="5.253897172s" podCreationTimestamp="2026-01-26 15:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:37.228088927 +0000 UTC m=+1432.365106162" watchObservedRunningTime="2026-01-26 15:57:37.253897172 +0000 UTC m=+1432.390914407" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.420756 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.441093 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.459000 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460079 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97311fac-af74-45cb-ad3a-c7a67efaf219" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.460208 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="97311fac-af74-45cb-ad3a-c7a67efaf219" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460296 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832abf5e-06a6-4e5f-8d93-0e91eefdb0de" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.460380 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="832abf5e-06a6-4e5f-8d93-0e91eefdb0de" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460457 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.460536 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460635 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d659c0-5c5b-461f-89d0-435b02bd409b" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.460707 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d659c0-5c5b-461f-89d0-435b02bd409b" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460786 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4970397f-f884-40ad-bca5-6c272f27ab4f" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.460872 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4970397f-f884-40ad-bca5-6c272f27ab4f" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.460956 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461021 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" 
containerName="cloudkitty-api" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.461100 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api-log" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461171 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api-log" Jan 26 15:57:37 crc kubenswrapper[4713]: E0126 15:57:37.461250 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc98b501-5b02-49f5-a1e3-2543e981eab8" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461313 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc98b501-5b02-49f5-a1e3-2543e981eab8" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461691 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="97311fac-af74-45cb-ad3a-c7a67efaf219" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461783 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461859 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.461939 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d659c0-5c5b-461f-89d0-435b02bd409b" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.462024 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc98b501-5b02-49f5-a1e3-2543e981eab8" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.462100 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api-log" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.462177 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="832abf5e-06a6-4e5f-8d93-0e91eefdb0de" containerName="mariadb-account-create-update" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.462258 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4970397f-f884-40ad-bca5-6c272f27ab4f" containerName="mariadb-database-create" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.464005 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.466509 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.467576 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.471103 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.479228 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.565562 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgcf\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.565627 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.565662 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.565681 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.566182 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.566282 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.566574 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.566664 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.566899 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668467 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668531 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668555 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668661 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668694 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668741 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668770 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668848 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.668899 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpgcf\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.669646 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.675431 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.675445 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.675555 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.675793 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.675924 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.676008 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.676429 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.690845 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpgcf\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf\") pod \"cloudkitty-api-0\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.795063 
4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 15:57:37 crc kubenswrapper[4713]: I0126 15:57:37.818765 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" path="/var/lib/kubelet/pods/e2949cfe-9664-49c5-8d4b-53f4b57bb8b2/volumes" Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.229276 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-central-agent" containerID="cri-o://07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" gracePeriod=30 Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.230982 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerStarted","Data":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.231159 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.234006 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="proxy-httpd" containerID="cri-o://1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" gracePeriod=30 Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.234312 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="sg-core" containerID="cri-o://c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" gracePeriod=30 Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.235246 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-notification-agent" containerID="cri-o://6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" gracePeriod=30 Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.263081 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.363888932 podStartE2EDuration="8.263059732s" podCreationTimestamp="2026-01-26 15:57:30 +0000 UTC" firstStartedPulling="2026-01-26 15:57:32.303829132 +0000 UTC m=+1427.440846377" lastFinishedPulling="2026-01-26 15:57:37.202999942 +0000 UTC m=+1432.340017177" observedRunningTime="2026-01-26 15:57:38.254410499 +0000 UTC m=+1433.391427734" watchObservedRunningTime="2026-01-26 15:57:38.263059732 +0000 UTC m=+1433.400076967" Jan 26 15:57:38 crc kubenswrapper[4713]: I0126 15:57:38.341448 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.141788 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.200964 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201059 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201112 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201237 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201402 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbckw\" (UniqueName: \"kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201480 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.201511 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data\") pod \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\" (UID: \"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9\") " Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.206271 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.207885 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.208186 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw" (OuterVolumeSpecName: "kube-api-access-qbckw") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "kube-api-access-qbckw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.221626 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts" (OuterVolumeSpecName: "scripts") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.245002 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255613 4713 generic.go:334] "Generic (PLEG): container finished" podID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" exitCode=0 Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255652 4713 generic.go:334] "Generic (PLEG): container finished" podID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" exitCode=2 Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255662 4713 generic.go:334] "Generic (PLEG): container finished" podID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" exitCode=0 Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255672 4713 generic.go:334] "Generic (PLEG): container finished" podID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" exitCode=0 Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255827 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerDied","Data":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255864 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerDied","Data":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255903 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerDied","Data":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255917 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerDied","Data":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255929 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9","Type":"ContainerDied","Data":"0c5e2988e6c643824836091185cbb750fa9300a13ba50886daa9af8e82f259f8"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.255947 4713 scope.go:117] "RemoveContainer" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.256174 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.273638 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerStarted","Data":"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.273699 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerStarted","Data":"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.273713 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerStarted","Data":"110055bd0f24be8661c1183c7a9c7fa9aa0432ab0e8ea1a13a1260ce59c01c1e"} Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.273880 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.304532 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbckw\" (UniqueName: \"kubernetes.io/projected/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-kube-api-access-qbckw\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.304573 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.304585 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.304597 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.304608 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.325448 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.327532 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.327512866 podStartE2EDuration="2.327512866s" podCreationTimestamp="2026-01-26 15:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:39.320040196 +0000 UTC m=+1434.457057441" watchObservedRunningTime="2026-01-26 15:57:39.327512866 +0000 UTC m=+1434.464530101" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.353829 4713 scope.go:117] "RemoveContainer" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.363278 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data" (OuterVolumeSpecName: "config-data") pod "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" (UID: "6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.379114 4713 scope.go:117] "RemoveContainer" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.400291 4713 scope.go:117] "RemoveContainer" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.407186 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.408178 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.434776 4713 scope.go:117] "RemoveContainer" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.436799 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": container with ID starting with 1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990 not found: ID does not exist" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.436854 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} err="failed to get container status \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": rpc error: code = NotFound desc = could not find container \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": container with ID starting with 1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.436886 4713 scope.go:117] "RemoveContainer" 
containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.437303 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": container with ID starting with c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6 not found: ID does not exist" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.437324 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} err="failed to get container status \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": rpc error: code = NotFound desc = could not find container \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": container with ID starting with c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.437338 4713 scope.go:117] "RemoveContainer" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.437867 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": container with ID starting with 6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c not found: ID does not exist" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.437916 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} err="failed to get container status \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": rpc error: code = NotFound desc = could not find container \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": container with ID starting with 6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.437948 4713 scope.go:117] "RemoveContainer" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.438311 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": container with ID starting with 07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8 not found: ID does not exist" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438340 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} err="failed to get container status \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": rpc error: code = NotFound desc = could not find container \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": container with ID starting with 
07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438382 4713 scope.go:117] "RemoveContainer" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438658 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} err="failed to get container status \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": rpc error: code = NotFound desc = could not find container \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": container with ID starting with 1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438679 4713 scope.go:117] "RemoveContainer" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438943 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} err="failed to get container status \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": rpc error: code = NotFound desc = could not find container \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": container with ID starting with c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.438968 4713 scope.go:117] "RemoveContainer" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439210 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} err="failed to get container status \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": rpc error: code = NotFound desc = could not find container \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": container with ID starting with 6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439229 4713 scope.go:117] "RemoveContainer" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439497 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} err="failed to get container status \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": rpc error: code = NotFound desc = could not find container \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": container with ID starting with 07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439520 4713 scope.go:117] "RemoveContainer" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439778 4713 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} err="failed to get container status \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": rpc error: code = NotFound desc = could not find container \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": container with ID starting with 1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439801 4713 scope.go:117] "RemoveContainer" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439983 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} err="failed to get container status \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": rpc error: code = NotFound desc = could not find container \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": container with ID starting with c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.439998 4713 scope.go:117] "RemoveContainer" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440327 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} err="failed to get container status \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": rpc error: code = NotFound desc = could not find container \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": container with ID starting with 6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440343 4713 scope.go:117] "RemoveContainer" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440688 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} err="failed to get container status \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": rpc error: code = NotFound desc = could not find container \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": container with ID starting with 07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440707 4713 scope.go:117] "RemoveContainer" containerID="1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440903 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990"} err="failed to get container status \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": rpc error: code = NotFound desc = could not find container \"1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990\": container with ID starting with 1d795d379ec85f3f898b913d1b329327635ffc955b984f5111584fcd05f9b990 not found: ID does not exist" Jan 
26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.440921 4713 scope.go:117] "RemoveContainer" containerID="c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.441123 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6"} err="failed to get container status \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": rpc error: code = NotFound desc = could not find container \"c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6\": container with ID starting with c1e6b252e6464dba282135c4605e871a6c8fa6c806965db54653dc733b45bea6 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.441135 4713 scope.go:117] "RemoveContainer" containerID="6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.441337 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c"} err="failed to get container status \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": rpc error: code = NotFound desc = could not find container \"6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c\": container with ID starting with 6f2c9a80abfa9c0cc00f60269c65d5940ebaba6211721801e5852aec0a4c6a1c not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.441353 4713 scope.go:117] "RemoveContainer" containerID="07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.441570 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8"} err="failed to get container status \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": rpc error: code = NotFound desc = could not find container \"07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8\": container with ID starting with 07759b07e111da91319eb67abd803cbfed438b8c68e869cc961ad70511792cc8 not found: ID does not exist" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.591935 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.601278 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.624603 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.625741 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-notification-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.627709 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-notification-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.627961 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-central-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.628045 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" 
containerName="ceilometer-central-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.628139 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="proxy-httpd" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.628209 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="proxy-httpd" Jan 26 15:57:39 crc kubenswrapper[4713]: E0126 15:57:39.628281 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="sg-core" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.628400 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="sg-core" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.628863 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-notification-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.628960 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="sg-core" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.629051 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="proxy-httpd" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.629132 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" containerName="ceilometer-central-agent" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.635167 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.637610 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.637755 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.640138 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.714168 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.714242 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.715471 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.715578 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.715631 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcvq\" (UniqueName: \"kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.715694 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.715780 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.818978 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9" path="/var/lib/kubelet/pods/6b86ab8a-dd28-4a3f-9f7d-b7791a6185c9/volumes" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.820889 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.821132 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.821727 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.822034 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.822517 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcvq\" (UniqueName: \"kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.822722 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.823247 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.823473 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.823751 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.826177 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.827753 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.828692 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.829761 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.843940 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcvq\" (UniqueName: \"kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq\") pod \"ceilometer-0\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " pod="openstack/ceilometer-0" Jan 26 15:57:39 crc kubenswrapper[4713]: I0126 15:57:39.990796 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:40 crc kubenswrapper[4713]: W0126 15:57:40.475092 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4964d9b2_31fe_4280_9b46_a50c4491da29.slice/crio-7646cdedc516d0aea69d59cabbd9620e3f46a3fd58ad8ad1f443fdbbf69f90c6 WatchSource:0}: Error finding container 7646cdedc516d0aea69d59cabbd9620e3f46a3fd58ad8ad1f443fdbbf69f90c6: Status 404 returned error can't find the container with id 7646cdedc516d0aea69d59cabbd9620e3f46a3fd58ad8ad1f443fdbbf69f90c6 Jan 26 15:57:40 crc kubenswrapper[4713]: I0126 15:57:40.479400 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.098637 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-254xv"] Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.100736 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.111199 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.111255 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pnnhw" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.111251 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.137173 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-254xv"] Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.158880 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="e2949cfe-9664-49c5-8d4b-53f4b57bb8b2" containerName="cloudkitty-api" probeResult="failure" output="Get \"http://10.217.0.185:8889/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.163394 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.163676 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.163759 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99pft\" (UniqueName: \"kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.163791 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.213555 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.213594 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.266092 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.266411 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.266494 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99pft\" (UniqueName: \"kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.266523 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.266613 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.273706 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.283853 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.284137 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.284744 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.299274 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99pft\" (UniqueName: \"kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft\") pod \"nova-cell0-conductor-db-sync-254xv\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.332424 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerStarted","Data":"8bfc2a314e3fe56d01d33d4daaa216d054c97ed5147ce5b5af41800594d52cd3"} Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.332494 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.332510 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerStarted","Data":"7646cdedc516d0aea69d59cabbd9620e3f46a3fd58ad8ad1f443fdbbf69f90c6"} Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.332795 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 15:57:41 crc kubenswrapper[4713]: I0126 15:57:41.472844 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:57:42 crc kubenswrapper[4713]: I0126 15:57:42.035496 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-254xv"] Jan 26 15:57:42 crc kubenswrapper[4713]: I0126 15:57:42.352930 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-254xv" event={"ID":"edbeb019-b62a-41e2-8af4-63acbe2e0adb","Type":"ContainerStarted","Data":"29fde5a983950a2f88ab61dd3b6f0b7f5186bba0a45380a53e87f686239fcff4"} Jan 26 15:57:42 crc kubenswrapper[4713]: I0126 15:57:42.357814 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerStarted","Data":"fa45eafcdea8b7a82b9eb66e1063887f4d106219ba933c0dbf7b5f72546cb5fa"} Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.335313 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.335932 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.380917 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ee23a80-20ad-45b5-9670-c165085175ab","Type":"ContainerStarted","Data":"58f056d09217deb6e461ff0426c23e1f32713c1fdaee4dcfe3ee4665f3f1a9d7"} Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.386182 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.386236 4713 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.386189 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerStarted","Data":"91e030b98626f979adcbc4e92a4494c19de6a52248e91415217a9daea85ff36f"} Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.389344 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.390139 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.395036 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:43 crc kubenswrapper[4713]: I0126 15:57:43.411225 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.183033088 podStartE2EDuration="45.411196267s" podCreationTimestamp="2026-01-26 15:56:58 +0000 UTC" firstStartedPulling="2026-01-26 15:57:00.130055747 +0000 UTC m=+1395.267072982" lastFinishedPulling="2026-01-26 15:57:42.358218926 +0000 UTC m=+1437.495236161" observedRunningTime="2026-01-26 15:57:43.399608021 +0000 UTC m=+1438.536625256" watchObservedRunningTime="2026-01-26 15:57:43.411196267 +0000 UTC m=+1438.548213502" Jan 26 15:57:44 crc kubenswrapper[4713]: I0126 15:57:44.361070 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 15:57:44 crc kubenswrapper[4713]: I0126 15:57:44.361512 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 15:57:44 crc kubenswrapper[4713]: I0126 15:57:44.399209 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:45 crc kubenswrapper[4713]: I0126 15:57:45.408270 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:57:46 crc kubenswrapper[4713]: I0126 15:57:46.421929 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerStarted","Data":"264717f125836c44b10099fd4024d6d00d0bcac614118eb090dfc2a71d295567"} Jan 26 15:57:46 crc kubenswrapper[4713]: I0126 15:57:46.422849 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:57:46 crc kubenswrapper[4713]: I0126 15:57:46.475463 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.024062127 podStartE2EDuration="7.47544231s" podCreationTimestamp="2026-01-26 15:57:39 +0000 UTC" firstStartedPulling="2026-01-26 15:57:40.477973795 +0000 UTC m=+1435.614991030" lastFinishedPulling="2026-01-26 15:57:45.929353978 +0000 UTC m=+1441.066371213" observedRunningTime="2026-01-26 15:57:46.459167852 +0000 UTC m=+1441.596185087" watchObservedRunningTime="2026-01-26 15:57:46.47544231 +0000 UTC m=+1441.612459545" Jan 26 15:57:46 crc kubenswrapper[4713]: I0126 15:57:46.631183 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:46 crc kubenswrapper[4713]: I0126 15:57:46.631312 4713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 26 15:57:47 crc kubenswrapper[4713]: I0126 15:57:47.083294 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 15:57:47 crc kubenswrapper[4713]: I0126 15:57:47.773007 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:48 crc kubenswrapper[4713]: I0126 15:57:48.448600 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-central-agent" containerID="cri-o://8bfc2a314e3fe56d01d33d4daaa216d054c97ed5147ce5b5af41800594d52cd3" gracePeriod=30 Jan 26 15:57:48 crc kubenswrapper[4713]: I0126 15:57:48.449269 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="proxy-httpd" containerID="cri-o://264717f125836c44b10099fd4024d6d00d0bcac614118eb090dfc2a71d295567" gracePeriod=30 Jan 26 15:57:48 crc kubenswrapper[4713]: I0126 15:57:48.456490 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="sg-core" containerID="cri-o://91e030b98626f979adcbc4e92a4494c19de6a52248e91415217a9daea85ff36f" gracePeriod=30 Jan 26 15:57:48 crc kubenswrapper[4713]: I0126 15:57:48.456672 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-notification-agent" containerID="cri-o://fa45eafcdea8b7a82b9eb66e1063887f4d106219ba933c0dbf7b5f72546cb5fa" gracePeriod=30 Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.461930 4713 generic.go:334] "Generic (PLEG): container finished" podID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerID="264717f125836c44b10099fd4024d6d00d0bcac614118eb090dfc2a71d295567" exitCode=0 Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462260 4713 generic.go:334] "Generic (PLEG): container finished" podID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerID="91e030b98626f979adcbc4e92a4494c19de6a52248e91415217a9daea85ff36f" exitCode=2 Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462273 4713 generic.go:334] "Generic (PLEG): container finished" podID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerID="fa45eafcdea8b7a82b9eb66e1063887f4d106219ba933c0dbf7b5f72546cb5fa" exitCode=0 Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462298 4713 generic.go:334] "Generic (PLEG): container finished" podID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerID="8bfc2a314e3fe56d01d33d4daaa216d054c97ed5147ce5b5af41800594d52cd3" exitCode=0 Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462088 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerDied","Data":"264717f125836c44b10099fd4024d6d00d0bcac614118eb090dfc2a71d295567"} Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462334 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerDied","Data":"91e030b98626f979adcbc4e92a4494c19de6a52248e91415217a9daea85ff36f"} Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462348 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerDied","Data":"fa45eafcdea8b7a82b9eb66e1063887f4d106219ba933c0dbf7b5f72546cb5fa"} Jan 26 15:57:49 crc kubenswrapper[4713]: I0126 15:57:49.462380 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerDied","Data":"8bfc2a314e3fe56d01d33d4daaa216d054c97ed5147ce5b5af41800594d52cd3"} Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.075635 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.078270 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.091706 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.139930 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.140038 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.140101 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbld\" (UniqueName: \"kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.242099 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.242193 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.242252 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbld\" (UniqueName: \"kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.242851 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.242876 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.269144 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbld\" (UniqueName: \"kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld\") pod \"redhat-operators-x4wkg\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:51 crc kubenswrapper[4713]: I0126 15:57:51.397962 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.467812 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530087 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530147 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530167 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530326 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530369 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530418 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlcvq\" (UniqueName: \"kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.530465 4713 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle\") pod \"4964d9b2-31fe-4280-9b46-a50c4491da29\" (UID: \"4964d9b2-31fe-4280-9b46-a50c4491da29\") " Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.531285 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.531404 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.539319 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq" (OuterVolumeSpecName: "kube-api-access-zlcvq") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "kube-api-access-zlcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.539686 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts" (OuterVolumeSpecName: "scripts") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.561254 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4964d9b2-31fe-4280-9b46-a50c4491da29","Type":"ContainerDied","Data":"7646cdedc516d0aea69d59cabbd9620e3f46a3fd58ad8ad1f443fdbbf69f90c6"} Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.561299 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.561307 4713 scope.go:117] "RemoveContainer" containerID="264717f125836c44b10099fd4024d6d00d0bcac614118eb090dfc2a71d295567" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.562920 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-254xv" event={"ID":"edbeb019-b62a-41e2-8af4-63acbe2e0adb","Type":"ContainerStarted","Data":"e0c6717ca6930d27a1658f2054f3e7c3aee59140d219f0eccb6d2fc62eff4904"} Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.584535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.584560 4713 scope.go:117] "RemoveContainer" containerID="91e030b98626f979adcbc4e92a4494c19de6a52248e91415217a9daea85ff36f" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.591813 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-254xv" podStartSLOduration=1.625578897 podStartE2EDuration="13.591792859s" podCreationTimestamp="2026-01-26 15:57:41 +0000 UTC" firstStartedPulling="2026-01-26 15:57:42.046070817 +0000 UTC m=+1437.183088062" lastFinishedPulling="2026-01-26 15:57:54.012284799 +0000 UTC m=+1449.149302024" observedRunningTime="2026-01-26 15:57:54.588892388 +0000 UTC m=+1449.725909623" watchObservedRunningTime="2026-01-26 15:57:54.591792859 +0000 UTC m=+1449.728810094" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.620268 4713 scope.go:117] "RemoveContainer" containerID="fa45eafcdea8b7a82b9eb66e1063887f4d106219ba933c0dbf7b5f72546cb5fa" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.633610 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.633640 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.633650 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4964d9b2-31fe-4280-9b46-a50c4491da29-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.633661 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlcvq\" (UniqueName: \"kubernetes.io/projected/4964d9b2-31fe-4280-9b46-a50c4491da29-kube-api-access-zlcvq\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.633670 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.635469 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.646637 4713 scope.go:117] "RemoveContainer" containerID="8bfc2a314e3fe56d01d33d4daaa216d054c97ed5147ce5b5af41800594d52cd3" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.674562 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:57:54 crc kubenswrapper[4713]: W0126 15:57:54.675717 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3c5546_2325_43de_882f_f5e9460ff920.slice/crio-1118381960ab6d1ce5fb5e3bd99a1f8232be65e0e2d3f5255cd7dcb60f3cfc50 WatchSource:0}: Error finding container 1118381960ab6d1ce5fb5e3bd99a1f8232be65e0e2d3f5255cd7dcb60f3cfc50: Status 404 returned error can't find the container with id 1118381960ab6d1ce5fb5e3bd99a1f8232be65e0e2d3f5255cd7dcb60f3cfc50 Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.693562 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data" (OuterVolumeSpecName: "config-data") pod "4964d9b2-31fe-4280-9b46-a50c4491da29" (UID: "4964d9b2-31fe-4280-9b46-a50c4491da29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.735677 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.736012 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4964d9b2-31fe-4280-9b46-a50c4491da29-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.956279 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.976896 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.987898 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:54 crc kubenswrapper[4713]: E0126 15:57:54.988589 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="proxy-httpd" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.988614 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="proxy-httpd" Jan 26 15:57:54 crc kubenswrapper[4713]: E0126 15:57:54.988632 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="sg-core" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.988640 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="sg-core" Jan 26 15:57:54 crc kubenswrapper[4713]: E0126 15:57:54.988674 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-central-agent" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.988682 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-central-agent" Jan 26 15:57:54 crc 
kubenswrapper[4713]: E0126 15:57:54.988700 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-notification-agent" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.988708 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-notification-agent" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.989935 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-central-agent" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.989967 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="ceilometer-notification-agent" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.989984 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="sg-core" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.990018 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" containerName="proxy-httpd" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.992710 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.995380 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:57:54 crc kubenswrapper[4713]: I0126 15:57:54.997581 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.007205 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.045932 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mpph\" (UniqueName: \"kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.045991 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.046044 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.046796 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.047071 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.047297 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.047353 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149542 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149623 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149706 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149770 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149795 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149865 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mpph\" (UniqueName: \"kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.149910 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.150270 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.150378 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.154301 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.155839 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.163466 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.166015 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.170533 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mpph\" (UniqueName: \"kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph\") pod \"ceilometer-0\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.347258 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.584997 4713 generic.go:334] "Generic (PLEG): container finished" podID="9a3c5546-2325-43de-882f-f5e9460ff920" containerID="8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8" exitCode=0 Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.585375 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerDied","Data":"8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8"} Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.585405 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerStarted","Data":"1118381960ab6d1ce5fb5e3bd99a1f8232be65e0e2d3f5255cd7dcb60f3cfc50"} Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.820480 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4964d9b2-31fe-4280-9b46-a50c4491da29" path="/var/lib/kubelet/pods/4964d9b2-31fe-4280-9b46-a50c4491da29/volumes" Jan 26 15:57:55 crc kubenswrapper[4713]: I0126 15:57:55.888481 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:56 crc kubenswrapper[4713]: I0126 15:57:56.600567 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerStarted","Data":"8238f390c017b83ae05ae24703401bab790c2afca437ff410c0f11ad11822e50"} Jan 26 15:57:56 crc kubenswrapper[4713]: I0126 15:57:56.660857 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:57:57 crc kubenswrapper[4713]: I0126 15:57:57.612281 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerStarted","Data":"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5"} Jan 26 15:57:57 crc kubenswrapper[4713]: I0126 15:57:57.614265 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerStarted","Data":"d05c028012ff9ab8a291af016a0a001d1e7ffd3c379cc4594013537b2adc0c29"} Jan 26 15:57:58 crc kubenswrapper[4713]: I0126 15:57:58.629081 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerStarted","Data":"4674f3f1e02a9f47c2368c3bba354b4f7b4c3a455317d3217327d45a215f3f02"} Jan 26 15:57:58 crc kubenswrapper[4713]: I0126 15:57:58.633290 4713 generic.go:334] "Generic (PLEG): container finished" podID="9a3c5546-2325-43de-882f-f5e9460ff920" containerID="e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5" exitCode=0 Jan 26 15:57:58 crc kubenswrapper[4713]: I0126 15:57:58.633355 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerDied","Data":"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5"} Jan 26 15:57:59 crc kubenswrapper[4713]: I0126 15:57:59.645203 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerStarted","Data":"98c1a9191a3267fb70b9fa0bc6a61fd7c2f7a55fa8a54f1daf9d2be083ee54e3"} Jan 26 15:57:59 crc kubenswrapper[4713]: I0126 15:57:59.647475 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerStarted","Data":"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1"} Jan 26 15:58:00 crc kubenswrapper[4713]: I0126 15:58:00.684386 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4wkg" podStartSLOduration=6.198737214 podStartE2EDuration="9.684354204s" podCreationTimestamp="2026-01-26 15:57:51 +0000 UTC" firstStartedPulling="2026-01-26 15:57:55.587803779 +0000 UTC m=+1450.724821044" lastFinishedPulling="2026-01-26 15:57:59.073420799 +0000 UTC m=+1454.210438034" observedRunningTime="2026-01-26 15:58:00.682353128 +0000 UTC m=+1455.819370363" watchObservedRunningTime="2026-01-26 15:58:00.684354204 +0000 UTC m=+1455.821371439" Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.399499 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.399799 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.671900 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-central-agent" containerID="cri-o://d05c028012ff9ab8a291af016a0a001d1e7ffd3c379cc4594013537b2adc0c29" gracePeriod=30 Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.672227 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerStarted","Data":"d11d622bb50ea71f65649d10c6822c1ac54fa1a263d99a95dd8de530962215e9"} Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.672282 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.672627 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="proxy-httpd" containerID="cri-o://d11d622bb50ea71f65649d10c6822c1ac54fa1a263d99a95dd8de530962215e9" gracePeriod=30 Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.672695 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="sg-core" containerID="cri-o://98c1a9191a3267fb70b9fa0bc6a61fd7c2f7a55fa8a54f1daf9d2be083ee54e3" gracePeriod=30 Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.672738 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-notification-agent" containerID="cri-o://4674f3f1e02a9f47c2368c3bba354b4f7b4c3a455317d3217327d45a215f3f02" gracePeriod=30 Jan 26 15:58:01 crc kubenswrapper[4713]: I0126 15:58:01.705480 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.751721307 
podStartE2EDuration="7.70545967s" podCreationTimestamp="2026-01-26 15:57:54 +0000 UTC" firstStartedPulling="2026-01-26 15:57:55.894967308 +0000 UTC m=+1451.031984543" lastFinishedPulling="2026-01-26 15:58:00.848705671 +0000 UTC m=+1455.985722906" observedRunningTime="2026-01-26 15:58:01.702798185 +0000 UTC m=+1456.839815420" watchObservedRunningTime="2026-01-26 15:58:01.70545967 +0000 UTC m=+1456.842476905" Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.533099 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4wkg" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="registry-server" probeResult="failure" output=< Jan 26 15:58:02 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 15:58:02 crc kubenswrapper[4713]: > Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686072 4713 generic.go:334] "Generic (PLEG): container finished" podID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerID="d11d622bb50ea71f65649d10c6822c1ac54fa1a263d99a95dd8de530962215e9" exitCode=0 Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686408 4713 generic.go:334] "Generic (PLEG): container finished" podID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerID="98c1a9191a3267fb70b9fa0bc6a61fd7c2f7a55fa8a54f1daf9d2be083ee54e3" exitCode=2 Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686417 4713 generic.go:334] "Generic (PLEG): container finished" podID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerID="4674f3f1e02a9f47c2368c3bba354b4f7b4c3a455317d3217327d45a215f3f02" exitCode=0 Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686425 4713 generic.go:334] "Generic (PLEG): container finished" podID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerID="d05c028012ff9ab8a291af016a0a001d1e7ffd3c379cc4594013537b2adc0c29" exitCode=0 Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686211 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerDied","Data":"d11d622bb50ea71f65649d10c6822c1ac54fa1a263d99a95dd8de530962215e9"} Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686465 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerDied","Data":"98c1a9191a3267fb70b9fa0bc6a61fd7c2f7a55fa8a54f1daf9d2be083ee54e3"} Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686482 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerDied","Data":"4674f3f1e02a9f47c2368c3bba354b4f7b4c3a455317d3217327d45a215f3f02"} Jan 26 15:58:02 crc kubenswrapper[4713]: I0126 15:58:02.686493 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerDied","Data":"d05c028012ff9ab8a291af016a0a001d1e7ffd3c379cc4594013537b2adc0c29"} Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.135112 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.235994 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236515 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236585 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236764 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236806 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236878 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.236916 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mpph\" (UniqueName: \"kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph\") pod \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\" (UID: \"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724\") " Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.237084 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.238516 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.238742 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.238767 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.250508 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph" (OuterVolumeSpecName: "kube-api-access-8mpph") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "kube-api-access-8mpph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.257182 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts" (OuterVolumeSpecName: "scripts") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.272146 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.325317 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.340983 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.341234 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.341324 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mpph\" (UniqueName: \"kubernetes.io/projected/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-kube-api-access-8mpph\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.341442 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.361288 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data" (OuterVolumeSpecName: "config-data") pod "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" (UID: "d4972ab2-8ee5-4abe-a8c2-0fb4a1484724"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.443661 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.703730 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4972ab2-8ee5-4abe-a8c2-0fb4a1484724","Type":"ContainerDied","Data":"8238f390c017b83ae05ae24703401bab790c2afca437ff410c0f11ad11822e50"} Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.703790 4713 scope.go:117] "RemoveContainer" containerID="d11d622bb50ea71f65649d10c6822c1ac54fa1a263d99a95dd8de530962215e9" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.703834 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.738326 4713 scope.go:117] "RemoveContainer" containerID="98c1a9191a3267fb70b9fa0bc6a61fd7c2f7a55fa8a54f1daf9d2be083ee54e3" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.752059 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.767895 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.776111 4713 scope.go:117] "RemoveContainer" containerID="4674f3f1e02a9f47c2368c3bba354b4f7b4c3a455317d3217327d45a215f3f02" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.786198 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:03 crc kubenswrapper[4713]: E0126 15:58:03.786804 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="sg-core" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.786829 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="sg-core" Jan 26 15:58:03 crc kubenswrapper[4713]: E0126 15:58:03.786847 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-central-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.786854 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-central-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: E0126 15:58:03.786871 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="proxy-httpd" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.786877 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="proxy-httpd" Jan 26 15:58:03 crc kubenswrapper[4713]: E0126 15:58:03.786895 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-notification-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.786903 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-notification-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.787163 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-central-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.787178 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="proxy-httpd" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.787195 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="sg-core" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.787213 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" containerName="ceilometer-notification-agent" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.789834 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.792531 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.796291 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.802290 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.817209 4713 scope.go:117] "RemoveContainer" containerID="d05c028012ff9ab8a291af016a0a001d1e7ffd3c379cc4594013537b2adc0c29" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.835215 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4972ab2-8ee5-4abe-a8c2-0fb4a1484724" path="/var/lib/kubelet/pods/d4972ab2-8ee5-4abe-a8c2-0fb4a1484724/volumes" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854571 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854709 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854793 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhx6\" (UniqueName: \"kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854817 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854851 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854898 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.854953 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " 
pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.956863 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957057 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957144 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957208 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957232 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkhx6\" (UniqueName: \"kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957263 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957302 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957452 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.957696 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.960749 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc 
kubenswrapper[4713]: I0126 15:58:03.960962 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.961449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.961949 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:03 crc kubenswrapper[4713]: I0126 15:58:03.973013 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkhx6\" (UniqueName: \"kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6\") pod \"ceilometer-0\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " pod="openstack/ceilometer-0" Jan 26 15:58:04 crc kubenswrapper[4713]: I0126 15:58:04.116569 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:04 crc kubenswrapper[4713]: W0126 15:58:04.701927 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a38ac9_97bc_481b_8e57_f799efea80a2.slice/crio-26d1911858a9a79dcff8ce94f83ef350245f566784685bc2089ea1f098fef3d8 WatchSource:0}: Error finding container 26d1911858a9a79dcff8ce94f83ef350245f566784685bc2089ea1f098fef3d8: Status 404 returned error can't find the container with id 26d1911858a9a79dcff8ce94f83ef350245f566784685bc2089ea1f098fef3d8 Jan 26 15:58:04 crc kubenswrapper[4713]: I0126 15:58:04.702711 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:04 crc kubenswrapper[4713]: I0126 15:58:04.722864 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerStarted","Data":"26d1911858a9a79dcff8ce94f83ef350245f566784685bc2089ea1f098fef3d8"} Jan 26 15:58:06 crc kubenswrapper[4713]: I0126 15:58:06.750853 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerStarted","Data":"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"} Jan 26 15:58:11 crc kubenswrapper[4713]: I0126 15:58:11.462227 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:11 crc kubenswrapper[4713]: I0126 15:58:11.534289 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:11 crc kubenswrapper[4713]: I0126 15:58:11.709451 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:58:11 crc kubenswrapper[4713]: I0126 15:58:11.867982 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerStarted","Data":"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"} Jan 26 15:58:12 crc kubenswrapper[4713]: I0126 15:58:12.882025 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerStarted","Data":"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"} Jan 26 15:58:12 crc kubenswrapper[4713]: I0126 15:58:12.882169 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4wkg" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="registry-server" containerID="cri-o://8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1" gracePeriod=2 Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.557654 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.751689 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwbld\" (UniqueName: \"kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld\") pod \"9a3c5546-2325-43de-882f-f5e9460ff920\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.751785 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content\") pod \"9a3c5546-2325-43de-882f-f5e9460ff920\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.752025 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities\") pod \"9a3c5546-2325-43de-882f-f5e9460ff920\" (UID: \"9a3c5546-2325-43de-882f-f5e9460ff920\") " Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.752604 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities" (OuterVolumeSpecName: "utilities") pod "9a3c5546-2325-43de-882f-f5e9460ff920" (UID: "9a3c5546-2325-43de-882f-f5e9460ff920"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.752889 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.760160 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld" (OuterVolumeSpecName: "kube-api-access-vwbld") pod "9a3c5546-2325-43de-882f-f5e9460ff920" (UID: "9a3c5546-2325-43de-882f-f5e9460ff920"). InnerVolumeSpecName "kube-api-access-vwbld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.839645 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a3c5546-2325-43de-882f-f5e9460ff920" (UID: "9a3c5546-2325-43de-882f-f5e9460ff920"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.855469 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwbld\" (UniqueName: \"kubernetes.io/projected/9a3c5546-2325-43de-882f-f5e9460ff920-kube-api-access-vwbld\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.855740 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3c5546-2325-43de-882f-f5e9460ff920-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.892502 4713 generic.go:334] "Generic (PLEG): container finished" podID="9a3c5546-2325-43de-882f-f5e9460ff920" containerID="8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1" exitCode=0 Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.892569 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerDied","Data":"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1"} Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.893540 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4wkg" event={"ID":"9a3c5546-2325-43de-882f-f5e9460ff920","Type":"ContainerDied","Data":"1118381960ab6d1ce5fb5e3bd99a1f8232be65e0e2d3f5255cd7dcb60f3cfc50"} Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.893607 4713 scope.go:117] "RemoveContainer" containerID="8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.892587 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4wkg" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.926024 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.932282 4713 scope.go:117] "RemoveContainer" containerID="e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5" Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.938757 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4wkg"] Jan 26 15:58:13 crc kubenswrapper[4713]: I0126 15:58:13.971112 4713 scope.go:117] "RemoveContainer" containerID="8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.025677 4713 scope.go:117] "RemoveContainer" containerID="8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1" Jan 26 15:58:14 crc kubenswrapper[4713]: E0126 15:58:14.026197 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1\": container with ID starting with 8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1 not found: ID does not exist" containerID="8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.026229 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1"} err="failed to get container status \"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1\": rpc error: code = NotFound desc = could not find container \"8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1\": container with ID starting with 8878998668b425bc0dcb321994b8f78180677cc394d8dfa13bab97d04adb48d1 not found: ID does not exist" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.026258 4713 scope.go:117] "RemoveContainer" containerID="e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5" Jan 26 15:58:14 crc kubenswrapper[4713]: E0126 15:58:14.026510 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5\": container with ID starting with e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5 not found: ID does not exist" containerID="e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.026551 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5"} err="failed to get container status \"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5\": rpc error: code = NotFound desc = could not find container \"e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5\": container with ID starting with e96c0fd028a9a4826f272a4171e9e5da1dedcac017b44e07ffff60ea0e22a1c5 not found: ID does not exist" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.026579 4713 scope.go:117] "RemoveContainer" containerID="8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8" Jan 26 15:58:14 crc kubenswrapper[4713]: E0126 15:58:14.026814 4713 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8\": container with ID starting with 8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8 not found: ID does not exist" containerID="8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8" Jan 26 15:58:14 crc kubenswrapper[4713]: I0126 15:58:14.026836 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8"} err="failed to get container status \"8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8\": rpc error: code = NotFound desc = could not find container \"8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8\": container with ID starting with 8b7d4b9ccb22a97cd6c6b32752f774aae1793b45c93aa6828a6892cbabaf79e8 not found: ID does not exist" Jan 26 15:58:15 crc kubenswrapper[4713]: I0126 15:58:15.818850 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" path="/var/lib/kubelet/pods/9a3c5546-2325-43de-882f-f5e9460ff920/volumes" Jan 26 15:58:16 crc kubenswrapper[4713]: I0126 15:58:16.500348 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 26 15:58:17 crc kubenswrapper[4713]: I0126 15:58:17.957547 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerStarted","Data":"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c"} Jan 26 15:58:17 crc kubenswrapper[4713]: I0126 15:58:17.959830 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:58:17 crc kubenswrapper[4713]: I0126 15:58:17.997964 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.157454517 podStartE2EDuration="14.997942547s" podCreationTimestamp="2026-01-26 15:58:03 +0000 UTC" firstStartedPulling="2026-01-26 15:58:04.710288132 +0000 UTC m=+1459.847305377" lastFinishedPulling="2026-01-26 15:58:16.550776152 +0000 UTC m=+1471.687793407" observedRunningTime="2026-01-26 15:58:17.986059743 +0000 UTC m=+1473.123076978" watchObservedRunningTime="2026-01-26 15:58:17.997942547 +0000 UTC m=+1473.134959782" Jan 26 15:58:24 crc kubenswrapper[4713]: I0126 15:58:24.043846 4713 generic.go:334] "Generic (PLEG): container finished" podID="edbeb019-b62a-41e2-8af4-63acbe2e0adb" containerID="e0c6717ca6930d27a1658f2054f3e7c3aee59140d219f0eccb6d2fc62eff4904" exitCode=0 Jan 26 15:58:24 crc kubenswrapper[4713]: I0126 15:58:24.043937 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-254xv" event={"ID":"edbeb019-b62a-41e2-8af4-63acbe2e0adb","Type":"ContainerDied","Data":"e0c6717ca6930d27a1658f2054f3e7c3aee59140d219f0eccb6d2fc62eff4904"} Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.543152 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.677635 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99pft\" (UniqueName: \"kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft\") pod \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.677811 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts\") pod \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.677939 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data\") pod \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.678068 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle\") pod \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\" (UID: \"edbeb019-b62a-41e2-8af4-63acbe2e0adb\") " Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.685437 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft" (OuterVolumeSpecName: "kube-api-access-99pft") pod "edbeb019-b62a-41e2-8af4-63acbe2e0adb" (UID: "edbeb019-b62a-41e2-8af4-63acbe2e0adb"). InnerVolumeSpecName "kube-api-access-99pft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.685672 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts" (OuterVolumeSpecName: "scripts") pod "edbeb019-b62a-41e2-8af4-63acbe2e0adb" (UID: "edbeb019-b62a-41e2-8af4-63acbe2e0adb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.726242 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edbeb019-b62a-41e2-8af4-63acbe2e0adb" (UID: "edbeb019-b62a-41e2-8af4-63acbe2e0adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.726768 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data" (OuterVolumeSpecName: "config-data") pod "edbeb019-b62a-41e2-8af4-63acbe2e0adb" (UID: "edbeb019-b62a-41e2-8af4-63acbe2e0adb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.782837 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.782954 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99pft\" (UniqueName: \"kubernetes.io/projected/edbeb019-b62a-41e2-8af4-63acbe2e0adb-kube-api-access-99pft\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.782987 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:25 crc kubenswrapper[4713]: I0126 15:58:25.783012 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbeb019-b62a-41e2-8af4-63acbe2e0adb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.069104 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-254xv" event={"ID":"edbeb019-b62a-41e2-8af4-63acbe2e0adb","Type":"ContainerDied","Data":"29fde5a983950a2f88ab61dd3b6f0b7f5186bba0a45380a53e87f686239fcff4"} Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.069506 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29fde5a983950a2f88ab61dd3b6f0b7f5186bba0a45380a53e87f686239fcff4" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.069158 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-254xv" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173095 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:58:26 crc kubenswrapper[4713]: E0126 15:58:26.173623 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="registry-server" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173648 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="registry-server" Jan 26 15:58:26 crc kubenswrapper[4713]: E0126 15:58:26.173672 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="extract-utilities" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173680 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="extract-utilities" Jan 26 15:58:26 crc kubenswrapper[4713]: E0126 15:58:26.173709 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbeb019-b62a-41e2-8af4-63acbe2e0adb" containerName="nova-cell0-conductor-db-sync" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173717 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbeb019-b62a-41e2-8af4-63acbe2e0adb" containerName="nova-cell0-conductor-db-sync" Jan 26 15:58:26 crc kubenswrapper[4713]: E0126 15:58:26.173734 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="extract-content" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173741 4713 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="extract-content" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173963 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbeb019-b62a-41e2-8af4-63acbe2e0adb" containerName="nova-cell0-conductor-db-sync" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.173992 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3c5546-2325-43de-882f-f5e9460ff920" containerName="registry-server" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.174754 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.177318 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.178116 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pnnhw" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.188628 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.191626 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.191824 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.192081 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7h9\" (UniqueName: \"kubernetes.io/projected/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-kube-api-access-lr7h9\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.293577 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.293957 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.294210 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7h9\" (UniqueName: \"kubernetes.io/projected/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-kube-api-access-lr7h9\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc 
kubenswrapper[4713]: I0126 15:58:26.298097 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.306196 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.311673 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7h9\" (UniqueName: \"kubernetes.io/projected/8acd4ad8-e9b7-4f39-9db6-f7139861e1c3-kube-api-access-lr7h9\") pod \"nova-cell0-conductor-0\" (UID: \"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:26 crc kubenswrapper[4713]: I0126 15:58:26.493327 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:27 crc kubenswrapper[4713]: I0126 15:58:27.034350 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:58:27 crc kubenswrapper[4713]: I0126 15:58:27.098112 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3","Type":"ContainerStarted","Data":"4f3c9040af45de65a4a23561f235bfbe0e0d0db900b1a36495ad5a130df93cce"} Jan 26 15:58:28 crc kubenswrapper[4713]: I0126 15:58:28.114442 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8acd4ad8-e9b7-4f39-9db6-f7139861e1c3","Type":"ContainerStarted","Data":"df62c92cf2ff5f2cc1ce619c96ced1494c817ef0ae54ee56eaae00890761dd5d"} Jan 26 15:58:28 crc kubenswrapper[4713]: I0126 15:58:28.115071 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:28 crc kubenswrapper[4713]: I0126 15:58:28.138529 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.138489182 podStartE2EDuration="2.138489182s" podCreationTimestamp="2026-01-26 15:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:28.132224886 +0000 UTC m=+1483.269242121" watchObservedRunningTime="2026-01-26 15:58:28.138489182 +0000 UTC m=+1483.275506417" Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.482122 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.482935 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-central-agent" containerID="cri-o://318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678" gracePeriod=30 Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.482973 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" 
containerID="cri-o://f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c" gracePeriod=30 Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.482985 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="sg-core" containerID="cri-o://7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014" gracePeriod=30 Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.483076 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-notification-agent" containerID="cri-o://fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957" gracePeriod=30 Jan 26 15:58:31 crc kubenswrapper[4713]: I0126 15:58:31.492092 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.208:3000/\": EOF" Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173121 4713 generic.go:334] "Generic (PLEG): container finished" podID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerID="f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c" exitCode=0 Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173321 4713 generic.go:334] "Generic (PLEG): container finished" podID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerID="7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014" exitCode=2 Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173331 4713 generic.go:334] "Generic (PLEG): container finished" podID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerID="318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678" exitCode=0 Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173203 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerDied","Data":"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c"} Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173379 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerDied","Data":"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"} Jan 26 15:58:32 crc kubenswrapper[4713]: I0126 15:58:32.173394 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerDied","Data":"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"} Jan 26 15:58:34 crc kubenswrapper[4713]: I0126 15:58:34.117438 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.208:3000/\": dial tcp 10.217.0.208:3000: connect: connection refused" Jan 26 15:58:36 crc kubenswrapper[4713]: I0126 15:58:36.526778 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.017713 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.051963 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052131 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052252 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052275 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052354 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkhx6\" (UniqueName: \"kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052392 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052416 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052484 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data\") pod \"d6a38ac9-97bc-481b-8e57-f799efea80a2\" (UID: \"d6a38ac9-97bc-481b-8e57-f799efea80a2\") " Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.052726 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.053394 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.053420 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6a38ac9-97bc-481b-8e57-f799efea80a2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.061113 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts" (OuterVolumeSpecName: "scripts") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.063583 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6" (OuterVolumeSpecName: "kube-api-access-qkhx6") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "kube-api-access-qkhx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106041 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xjrxt"] Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.106631 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106649 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.106659 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-central-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106669 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-central-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.106678 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-notification-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106684 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-notification-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.106713 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="sg-core" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106719 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="sg-core" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106902 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="proxy-httpd" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106921 4713 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-notification-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106936 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="sg-core" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.106944 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerName="ceilometer-central-agent" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.107710 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.110321 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.110694 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.111030 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.129432 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjrxt"] Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.163504 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.163650 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pb2\" (UniqueName: \"kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.163851 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.163932 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.164159 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 
15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.164171 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.164181 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkhx6\" (UniqueName: \"kubernetes.io/projected/d6a38ac9-97bc-481b-8e57-f799efea80a2-kube-api-access-qkhx6\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.245282 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.246766 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.254453 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.260412 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data" (OuterVolumeSpecName: "config-data") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.262970 4713 generic.go:334] "Generic (PLEG): container finished" podID="d6a38ac9-97bc-481b-8e57-f799efea80a2" containerID="fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957" exitCode=0 Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.263007 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerDied","Data":"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"} Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.263032 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6a38ac9-97bc-481b-8e57-f799efea80a2","Type":"ContainerDied","Data":"26d1911858a9a79dcff8ce94f83ef350245f566784685bc2089ea1f098fef3d8"} Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.263052 4713 scope.go:117] "RemoveContainer" containerID="f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.263196 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.268267 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.268322 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.272818 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.275587 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.275691 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4pb2\" (UniqueName: \"kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.275891 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.290317 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.299218 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.308005 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.319469 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4pb2\" (UniqueName: \"kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2\") pod \"nova-cell0-cell-mapping-xjrxt\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " pod="openstack/nova-cell0-cell-mapping-xjrxt" 
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.319647 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6a38ac9-97bc-481b-8e57-f799efea80a2" (UID: "d6a38ac9-97bc-481b-8e57-f799efea80a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.324221 4713 scope.go:117] "RemoveContainer" containerID="7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.348970 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.353050 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.356253 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.376796 4713 scope.go:117] "RemoveContainer" containerID="fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.377568 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.378908 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkm7w\" (UniqueName: \"kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.379000 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.379296 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.379508 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a38ac9-97bc-481b-8e57-f799efea80a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.422887 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.424792 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.427580 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.438163 4713 scope.go:117] "RemoveContainer" containerID="318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.481259 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.483121 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486429 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkm7w\" (UniqueName: \"kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486476 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486531 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486605 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486644 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bv5\" (UniqueName: \"kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486671 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486692 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486712 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25z5h\" (UniqueName: \"kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486773 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.486903 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.487114 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.496564 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjrxt"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.498592 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.501509 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.506802 4713 scope.go:117] "RemoveContainer" containerID="f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.515396 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.515952 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c\": container with ID starting with f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c not found: ID does not exist" containerID="f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.516047 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c"} err="failed to get container status \"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c\": rpc error: code = NotFound desc = could not find container \"f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c\": container with ID starting with f3df10ff675db38d6526397c347046fb3c12f659a4c9093c772a3a33430cde1c not found: ID does not exist"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.516128 4713 scope.go:117] "RemoveContainer" containerID="7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.516321 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkm7w\" (UniqueName: \"kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.522580 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014\": container with ID starting with 7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014 not found: ID does not exist" containerID="7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.522658 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014"} err="failed to get container status \"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014\": rpc error: code = NotFound desc = could not find container \"7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014\": container with ID starting with 7f4b333247ef3efbc70715c0d23fd83ba5ea65b6f0973a50c5509d4406c28014 not found: ID does not exist"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.522706 4713 scope.go:117] "RemoveContainer" containerID="fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"
Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.523775 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957\": container with ID starting with fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957 not found: ID does not exist" containerID="fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.523812 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957"} err="failed to get container status \"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957\": rpc error: code = NotFound desc = could not find container \"fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957\": container with ID starting with fb0cb874724cfa300aec73cebebbd6c69a1d2ab46e9f103e2b9594e59b0ae957 not found: ID does not exist"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.523842 4713 scope.go:117] "RemoveContainer" containerID="318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"
Jan 26 15:58:37 crc kubenswrapper[4713]: E0126 15:58:37.524701 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678\": container with ID starting with 318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678 not found: ID does not exist" containerID="318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.524755 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678"} err="failed to get container status \"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678\": rpc error: code = NotFound desc = could not find container \"318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678\": container with ID starting with 318219cc0f3abf999d7b43fcf00f8a1c4c661477431eb0a3d48e6ee55b5d5678 not found: ID does not exist"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.524823 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.563967 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.567168 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.576778 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588618 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588689 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588711 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588739 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588808 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxnp\" (UniqueName: \"kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588848 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588912 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588955 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bv5\" (UniqueName: \"kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.588982 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.589014 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.589037 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25z5h\" (UniqueName: \"kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.590112 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.613493 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.615098 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.616997 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.617353 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.624761 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25z5h\" (UniqueName: \"kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h\") pod \"nova-api-0\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") " pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.627852 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.631093 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bv5\" (UniqueName: \"kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5\") pod \"nova-scheduler-0\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691729 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691820 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wxnp\" (UniqueName: \"kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691862 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691888 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691922 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.691996 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.692013 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k27gv\" (UniqueName: \"kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.692046 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.692087 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.692112 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.693097 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.698540 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.701025 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.701675 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.721958 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wxnp\" (UniqueName: \"kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp\") pod \"nova-metadata-0\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") " pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.771674 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794436 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794548 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794581 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794627 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794734 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k27gv\" (UniqueName: \"kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.794805 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.806432 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.807211 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.809890 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.811449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.813663 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.815277 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k27gv\" (UniqueName: \"kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv\") pod \"dnsmasq-dns-78cd565959-j6929\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.820491 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.844631 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-j6929"
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.891957 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:58:37 crc kubenswrapper[4713]: I0126 15:58:37.953109 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.056596 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.059521 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.070096 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.070281 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095106 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095510 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095579 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095596 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095669 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095691 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lwf\" (UniqueName: \"kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.095716 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.134148 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.197743 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198048 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198187 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198354 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198569 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198660 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52lwf\" (UniqueName: \"kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.198746 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.202509 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.211004 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.230810 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.232740 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.239190 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52lwf\" (UniqueName: \"kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.243145 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.247838 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.328793 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjrxt"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.430732 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.592341 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.848916 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7xzpc"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.850958 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.870968 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7xzpc"]
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.873922 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.874140 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.980769 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.980829 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrbq\" (UniqueName: \"kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.980933 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:38 crc kubenswrapper[4713]: I0126 15:58:38.981136 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.083758 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.084182 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.084230 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrbq\" (UniqueName: \"kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.084355 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.101292 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.102207 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.119093 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc"
Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.145014 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrbq\" (UniqueName: \"kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq\") pod \"nova-cell1-conductor-db-sync-7xzpc\" (UID:
\"c1ab2adc-59f5-4803-b758-0a88857830b0\") " pod="openstack/nova-cell1-conductor-db-sync-7xzpc" Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.263718 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.282232 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.315430 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"] Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.359497 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.375413 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82b37183-5b76-4014-aaad-d8356670e767","Type":"ContainerStarted","Data":"2b1e3ac7087227438873d8b41719392655491dd2edc70180b7ebbbccfbedbee6"} Jan 26 15:58:39 crc kubenswrapper[4713]: W0126 15:58:39.375535 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e1d7724_b8b1_4865_ad1a_dba30ce76123.slice/crio-c0a2ebd1d583063d53706bcf28bcb7ecc7f9013b6828d5df280d7463e87c9bed WatchSource:0}: Error finding container c0a2ebd1d583063d53706bcf28bcb7ecc7f9013b6828d5df280d7463e87c9bed: Status 404 returned error can't find the container with id c0a2ebd1d583063d53706bcf28bcb7ecc7f9013b6828d5df280d7463e87c9bed Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.408534 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjrxt" event={"ID":"c1ce4cf0-e8a1-4475-a238-667b42cb429b","Type":"ContainerStarted","Data":"e7f8ca8d156e794fc997d7c9ecfe0aafb1105b85b44dfe366a08b726a95018ef"} Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.408584 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjrxt" event={"ID":"c1ce4cf0-e8a1-4475-a238-667b42cb429b","Type":"ContainerStarted","Data":"6b1cdfdf48e500aefe15560b6561fcb99fa1a0b57592b7e75803f075ba50d33b"} Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.417710 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:58:39 crc kubenswrapper[4713]: I0126 15:58:39.466672 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xjrxt" podStartSLOduration=2.466651148 podStartE2EDuration="2.466651148s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:39.449716232 +0000 UTC m=+1494.586733467" watchObservedRunningTime="2026-01-26 15:58:39.466651148 +0000 UTC m=+1494.603668383" Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.001977 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a38ac9-97bc-481b-8e57-f799efea80a2" path="/var/lib/kubelet/pods/d6a38ac9-97bc-481b-8e57-f799efea80a2/volumes" Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.155128 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.374605 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7xzpc"] Jan 26 15:58:40 crc 
kubenswrapper[4713]: I0126 15:58:40.440653 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerStarted","Data":"e015eb261bbfe6266a78f838f8435a19a0c77f0153869b9a9440dc6ec98fb024"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.463572 4713 generic.go:334] "Generic (PLEG): container finished" podID="e0176622-8842-4855-8962-ad88abbdb1e5" containerID="8d55a96d738dba8bfc0b9f82fda7d4b09378a72f724a427099407b82278e6945" exitCode=0 Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.463667 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-j6929" event={"ID":"e0176622-8842-4855-8962-ad88abbdb1e5","Type":"ContainerDied","Data":"8d55a96d738dba8bfc0b9f82fda7d4b09378a72f724a427099407b82278e6945"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.463697 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-j6929" event={"ID":"e0176622-8842-4855-8962-ad88abbdb1e5","Type":"ContainerStarted","Data":"6d45a93ea1d5c4aab34f2bfc83a25fb47c22a73702ef3c28d301c6c3c671481b"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.487530 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa","Type":"ContainerStarted","Data":"3785918717c90f79cd25467bd65f0f157e9424702f1805052f826fd5c816e85a"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.494917 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerStarted","Data":"c0a2ebd1d583063d53706bcf28bcb7ecc7f9013b6828d5df280d7463e87c9bed"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.510325 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerStarted","Data":"5ababd947818e86a85dde41bfe1bc57d99a631f34cc561e8d2bc433bd7dccf97"} Jan 26 15:58:40 crc kubenswrapper[4713]: I0126 15:58:40.537734 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" event={"ID":"c1ab2adc-59f5-4803-b758-0a88857830b0","Type":"ContainerStarted","Data":"7cc4a21cbaae61d6f088d100f80639675c14d6dc1a22dd1e5afa8f1cc4128952"} Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.575638 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" event={"ID":"c1ab2adc-59f5-4803-b758-0a88857830b0","Type":"ContainerStarted","Data":"bbf90d2cda31f005a60965d0273df86e11b2a78d6391634f41f11752905622b8"} Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.588310 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerStarted","Data":"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971"} Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.589357 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.598927 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-j6929" event={"ID":"e0176622-8842-4855-8962-ad88abbdb1e5","Type":"ContainerStarted","Data":"5d18f621f5b0c9f8e399aa5ca23ef63a8992ab1457d582b892fa529aa822c1b4"} Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.600134 4713 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-j6929" Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.618842 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" podStartSLOduration=3.6188242280000003 podStartE2EDuration="3.618824228s" podCreationTimestamp="2026-01-26 15:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:41.60432108 +0000 UTC m=+1496.741338325" watchObservedRunningTime="2026-01-26 15:58:41.618824228 +0000 UTC m=+1496.755841453" Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.620347 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:58:41 crc kubenswrapper[4713]: I0126 15:58:41.698901 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-j6929" podStartSLOduration=4.698876046 podStartE2EDuration="4.698876046s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:41.642991376 +0000 UTC m=+1496.780008611" watchObservedRunningTime="2026-01-26 15:58:41.698876046 +0000 UTC m=+1496.835893281" Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.656972 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerStarted","Data":"eb9ab08347c63aed6b9a8259e9e9b86139a9a0d55bf45bdc2325d88eb4d6df27"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.658406 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerStarted","Data":"96cc34764eb2231e9bd316f7b9fb0c856ba2c373b70a906fc8dabc2146ca2f29"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.659305 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerStarted","Data":"46768f605c2c316aa7a17408b7431ae8eb8dd67cee4e44fb1b5d9a26c2b99d97"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.659338 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerStarted","Data":"5c61cbf5a85a4a22484384850c0b2975a3c272c2dbbb6a4875d0a18cccee1d81"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.659435 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-log" containerID="cri-o://5c61cbf5a85a4a22484384850c0b2975a3c272c2dbbb6a4875d0a18cccee1d81" gracePeriod=30 Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.659541 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-metadata" containerID="cri-o://46768f605c2c316aa7a17408b7431ae8eb8dd67cee4e44fb1b5d9a26c2b99d97" gracePeriod=30 Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.662583 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerStarted","Data":"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.676893 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82b37183-5b76-4014-aaad-d8356670e767","Type":"ContainerStarted","Data":"1b5cc266fc3d11f0ffd94d83cdc636d994503b59918464cc510c0cd269546c9b"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.677008 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="82b37183-5b76-4014-aaad-d8356670e767" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1b5cc266fc3d11f0ffd94d83cdc636d994503b59918464cc510c0cd269546c9b" gracePeriod=30 Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.683890 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa","Type":"ContainerStarted","Data":"d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab"} Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.699559 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.523036885 podStartE2EDuration="8.699543096s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="2026-01-26 15:58:39.399545103 +0000 UTC m=+1494.536562328" lastFinishedPulling="2026-01-26 15:58:44.576051314 +0000 UTC m=+1499.713068539" observedRunningTime="2026-01-26 15:58:45.699448383 +0000 UTC m=+1500.836465618" watchObservedRunningTime="2026-01-26 15:58:45.699543096 +0000 UTC m=+1500.836560331" Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.733183 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.556937146 podStartE2EDuration="8.73316555s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="2026-01-26 15:58:39.396549419 +0000 UTC m=+1494.533566654" lastFinishedPulling="2026-01-26 15:58:44.572777823 +0000 UTC m=+1499.709795058" observedRunningTime="2026-01-26 15:58:45.726583775 +0000 UTC m=+1500.863601020" watchObservedRunningTime="2026-01-26 15:58:45.73316555 +0000 UTC m=+1500.870182785" Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.783253 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.870085011 podStartE2EDuration="8.783232977s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="2026-01-26 15:58:38.663052043 +0000 UTC m=+1493.800069278" lastFinishedPulling="2026-01-26 15:58:44.576200009 +0000 UTC m=+1499.713217244" observedRunningTime="2026-01-26 15:58:45.7765746 +0000 UTC m=+1500.913591835" watchObservedRunningTime="2026-01-26 15:58:45.783232977 +0000 UTC m=+1500.920250212" Jan 26 15:58:45 crc kubenswrapper[4713]: I0126 15:58:45.833541 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.650953947 podStartE2EDuration="8.833518419s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="2026-01-26 15:58:39.423126675 +0000 UTC m=+1494.560143910" lastFinishedPulling="2026-01-26 15:58:44.605691147 +0000 UTC m=+1499.742708382" observedRunningTime="2026-01-26 15:58:45.82072149 +0000 UTC m=+1500.957738725" watchObservedRunningTime="2026-01-26 
15:58:45.833518419 +0000 UTC m=+1500.970535654" Jan 26 15:58:46 crc kubenswrapper[4713]: I0126 15:58:46.704273 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerStarted","Data":"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1"} Jan 26 15:58:46 crc kubenswrapper[4713]: I0126 15:58:46.714306 4713 generic.go:334] "Generic (PLEG): container finished" podID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerID="5c61cbf5a85a4a22484384850c0b2975a3c272c2dbbb6a4875d0a18cccee1d81" exitCode=143 Jan 26 15:58:46 crc kubenswrapper[4713]: I0126 15:58:46.714428 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerDied","Data":"5c61cbf5a85a4a22484384850c0b2975a3c272c2dbbb6a4875d0a18cccee1d81"} Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.578318 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.699668 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.700024 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.730857 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerStarted","Data":"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599"} Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.730938 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.752555 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.729269458 podStartE2EDuration="10.75253705s" podCreationTimestamp="2026-01-26 15:58:37 +0000 UTC" firstStartedPulling="2026-01-26 15:58:40.179948416 +0000 UTC m=+1495.316965641" lastFinishedPulling="2026-01-26 15:58:47.203215998 +0000 UTC m=+1502.340233233" observedRunningTime="2026-01-26 15:58:47.750673508 +0000 UTC m=+1502.887690743" watchObservedRunningTime="2026-01-26 15:58:47.75253705 +0000 UTC m=+1502.889554285" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.773010 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.773076 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.835023 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.835069 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.835083 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:58:47 crc kubenswrapper[4713]: I0126 15:58:47.856590 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-j6929" Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 
15:58:48.010461 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.010708 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="dnsmasq-dns" containerID="cri-o://0b281f1f1cb6be1832f2d972cb83d46d8219295aea07a7ec1a550c00009f5b17" gracePeriod=10 Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.758195 4713 generic.go:334] "Generic (PLEG): container finished" podID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerID="0b281f1f1cb6be1832f2d972cb83d46d8219295aea07a7ec1a550c00009f5b17" exitCode=0 Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.758255 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" event={"ID":"b1d0ef70-9f37-4d0c-b317-7100a193699e","Type":"ContainerDied","Data":"0b281f1f1cb6be1832f2d972cb83d46d8219295aea07a7ec1a550c00009f5b17"} Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.782602 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.783132 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:58:48 crc kubenswrapper[4713]: I0126 15:58:48.828769 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.398573 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518226 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518519 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxrr7\" (UniqueName: \"kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518605 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518635 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518654 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.518731 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0\") pod \"b1d0ef70-9f37-4d0c-b317-7100a193699e\" (UID: \"b1d0ef70-9f37-4d0c-b317-7100a193699e\") " Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.533840 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7" (OuterVolumeSpecName: "kube-api-access-wxrr7") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "kube-api-access-wxrr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.603846 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config" (OuterVolumeSpecName: "config") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.603867 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.615322 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.621412 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.621436 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.621447 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.621456 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxrr7\" (UniqueName: \"kubernetes.io/projected/b1d0ef70-9f37-4d0c-b317-7100a193699e-kube-api-access-wxrr7\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.629910 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.644952 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1d0ef70-9f37-4d0c-b317-7100a193699e" (UID: "b1d0ef70-9f37-4d0c-b317-7100a193699e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.723422 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.723458 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1d0ef70-9f37-4d0c-b317-7100a193699e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.770722 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.770751 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-2j86n" event={"ID":"b1d0ef70-9f37-4d0c-b317-7100a193699e","Type":"ContainerDied","Data":"d065e628c087e60c199e96844ee4627d8eb2d7fa10161b3b57c0fdea71f3156b"} Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.770842 4713 scope.go:117] "RemoveContainer" containerID="0b281f1f1cb6be1832f2d972cb83d46d8219295aea07a7ec1a550c00009f5b17" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.792831 4713 scope.go:117] "RemoveContainer" containerID="5b81ac5be36301edc769d18e08a480401fa7f280e2324ee24d364602dd3e9088" Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.825015 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:58:49 crc kubenswrapper[4713]: I0126 15:58:49.825056 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-2j86n"] Jan 26 15:58:51 crc kubenswrapper[4713]: I0126 15:58:51.795290 4713 generic.go:334] "Generic (PLEG): container finished" podID="c1ce4cf0-e8a1-4475-a238-667b42cb429b" containerID="e7f8ca8d156e794fc997d7c9ecfe0aafb1105b85b44dfe366a08b726a95018ef" exitCode=0 Jan 26 15:58:51 crc kubenswrapper[4713]: I0126 15:58:51.795391 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjrxt" event={"ID":"c1ce4cf0-e8a1-4475-a238-667b42cb429b","Type":"ContainerDied","Data":"e7f8ca8d156e794fc997d7c9ecfe0aafb1105b85b44dfe366a08b726a95018ef"} Jan 26 15:58:51 crc kubenswrapper[4713]: I0126 15:58:51.815303 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" path="/var/lib/kubelet/pods/b1d0ef70-9f37-4d0c-b317-7100a193699e/volumes" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.248146 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.303213 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts\") pod \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.303289 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4pb2\" (UniqueName: \"kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2\") pod \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.303318 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data\") pod \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.303647 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle\") pod \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\" (UID: \"c1ce4cf0-e8a1-4475-a238-667b42cb429b\") " Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.310888 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts" (OuterVolumeSpecName: "scripts") pod "c1ce4cf0-e8a1-4475-a238-667b42cb429b" (UID: "c1ce4cf0-e8a1-4475-a238-667b42cb429b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.311551 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2" (OuterVolumeSpecName: "kube-api-access-x4pb2") pod "c1ce4cf0-e8a1-4475-a238-667b42cb429b" (UID: "c1ce4cf0-e8a1-4475-a238-667b42cb429b"). InnerVolumeSpecName "kube-api-access-x4pb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.335548 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1ce4cf0-e8a1-4475-a238-667b42cb429b" (UID: "c1ce4cf0-e8a1-4475-a238-667b42cb429b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.344424 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data" (OuterVolumeSpecName: "config-data") pod "c1ce4cf0-e8a1-4475-a238-667b42cb429b" (UID: "c1ce4cf0-e8a1-4475-a238-667b42cb429b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.418849 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.419794 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.419849 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4pb2\" (UniqueName: \"kubernetes.io/projected/c1ce4cf0-e8a1-4475-a238-667b42cb429b-kube-api-access-x4pb2\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.419893 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce4cf0-e8a1-4475-a238-667b42cb429b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.828312 4713 generic.go:334] "Generic (PLEG): container finished" podID="c1ab2adc-59f5-4803-b758-0a88857830b0" containerID="bbf90d2cda31f005a60965d0273df86e11b2a78d6391634f41f11752905622b8" exitCode=0 Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.828555 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" event={"ID":"c1ab2adc-59f5-4803-b758-0a88857830b0","Type":"ContainerDied","Data":"bbf90d2cda31f005a60965d0273df86e11b2a78d6391634f41f11752905622b8"} Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.833884 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjrxt" event={"ID":"c1ce4cf0-e8a1-4475-a238-667b42cb429b","Type":"ContainerDied","Data":"6b1cdfdf48e500aefe15560b6561fcb99fa1a0b57592b7e75803f075ba50d33b"} Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.833945 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1cdfdf48e500aefe15560b6561fcb99fa1a0b57592b7e75803f075ba50d33b" Jan 26 15:58:53 crc kubenswrapper[4713]: I0126 15:58:53.833958 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjrxt" Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.051532 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.051904 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-log" containerID="cri-o://96cc34764eb2231e9bd316f7b9fb0c856ba2c373b70a906fc8dabc2146ca2f29" gracePeriod=30 Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.052526 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-api" containerID="cri-o://eb9ab08347c63aed6b9a8259e9e9b86139a9a0d55bf45bdc2325d88eb4d6df27" gracePeriod=30 Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.068026 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.068267 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerName="nova-scheduler-scheduler" containerID="cri-o://d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab" gracePeriod=30 Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.847452 4713 generic.go:334] "Generic (PLEG): container finished" podID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerID="96cc34764eb2231e9bd316f7b9fb0c856ba2c373b70a906fc8dabc2146ca2f29" exitCode=143 Jan 26 15:58:54 crc kubenswrapper[4713]: I0126 15:58:54.847928 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerDied","Data":"96cc34764eb2231e9bd316f7b9fb0c856ba2c373b70a906fc8dabc2146ca2f29"} Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.425932 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.567290 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkrbq\" (UniqueName: \"kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq\") pod \"c1ab2adc-59f5-4803-b758-0a88857830b0\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.567691 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle\") pod \"c1ab2adc-59f5-4803-b758-0a88857830b0\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.567807 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data\") pod \"c1ab2adc-59f5-4803-b758-0a88857830b0\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.567940 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts\") pod \"c1ab2adc-59f5-4803-b758-0a88857830b0\" (UID: \"c1ab2adc-59f5-4803-b758-0a88857830b0\") " Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.572909 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts" (OuterVolumeSpecName: "scripts") pod "c1ab2adc-59f5-4803-b758-0a88857830b0" (UID: "c1ab2adc-59f5-4803-b758-0a88857830b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.573115 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq" (OuterVolumeSpecName: "kube-api-access-kkrbq") pod "c1ab2adc-59f5-4803-b758-0a88857830b0" (UID: "c1ab2adc-59f5-4803-b758-0a88857830b0"). InnerVolumeSpecName "kube-api-access-kkrbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.601446 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data" (OuterVolumeSpecName: "config-data") pod "c1ab2adc-59f5-4803-b758-0a88857830b0" (UID: "c1ab2adc-59f5-4803-b758-0a88857830b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.604566 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1ab2adc-59f5-4803-b758-0a88857830b0" (UID: "c1ab2adc-59f5-4803-b758-0a88857830b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.670128 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkrbq\" (UniqueName: \"kubernetes.io/projected/c1ab2adc-59f5-4803-b758-0a88857830b0-kube-api-access-kkrbq\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.670162 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.670171 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.670180 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ab2adc-59f5-4803-b758-0a88857830b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.860492 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" event={"ID":"c1ab2adc-59f5-4803-b758-0a88857830b0","Type":"ContainerDied","Data":"7cc4a21cbaae61d6f088d100f80639675c14d6dc1a22dd1e5afa8f1cc4128952"} Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.860546 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc4a21cbaae61d6f088d100f80639675c14d6dc1a22dd1e5afa8f1cc4128952" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.860576 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7xzpc" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.971045 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:58:55 crc kubenswrapper[4713]: E0126 15:58:55.971794 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="dnsmasq-dns" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.971826 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="dnsmasq-dns" Jan 26 15:58:55 crc kubenswrapper[4713]: E0126 15:58:55.971853 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ce4cf0-e8a1-4475-a238-667b42cb429b" containerName="nova-manage" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.971865 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ce4cf0-e8a1-4475-a238-667b42cb429b" containerName="nova-manage" Jan 26 15:58:55 crc kubenswrapper[4713]: E0126 15:58:55.971902 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab2adc-59f5-4803-b758-0a88857830b0" containerName="nova-cell1-conductor-db-sync" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.971933 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab2adc-59f5-4803-b758-0a88857830b0" containerName="nova-cell1-conductor-db-sync" Jan 26 15:58:55 crc kubenswrapper[4713]: E0126 15:58:55.971971 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="init" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.971984 4713 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="init" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.972305 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab2adc-59f5-4803-b758-0a88857830b0" containerName="nova-cell1-conductor-db-sync" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.972390 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d0ef70-9f37-4d0c-b317-7100a193699e" containerName="dnsmasq-dns" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.972418 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ce4cf0-e8a1-4475-a238-667b42cb429b" containerName="nova-manage" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.973598 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.978147 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 15:58:55 crc kubenswrapper[4713]: I0126 15:58:55.997129 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.078966 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.079059 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw9jr\" (UniqueName: \"kubernetes.io/projected/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-kube-api-access-kw9jr\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.079131 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.181286 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.181454 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.181601 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw9jr\" (UniqueName: \"kubernetes.io/projected/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-kube-api-access-kw9jr\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc 
kubenswrapper[4713]: I0126 15:58:56.186157 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.190478 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.207549 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw9jr\" (UniqueName: \"kubernetes.io/projected/10eb8f34-03cf-4b24-b8fd-63fe3886d2d9-kube-api-access-kw9jr\") pod \"nova-cell1-conductor-0\" (UID: \"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.304039 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:56 crc kubenswrapper[4713]: I0126 15:58:56.867811 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:58:57 crc kubenswrapper[4713]: E0126 15:58:57.774204 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab is running failed: container process not found" containerID="d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:58:57 crc kubenswrapper[4713]: E0126 15:58:57.775428 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab is running failed: container process not found" containerID="d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:58:57 crc kubenswrapper[4713]: E0126 15:58:57.776004 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab is running failed: container process not found" containerID="d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:58:57 crc kubenswrapper[4713]: E0126 15:58:57.776104 4713 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerName="nova-scheduler-scheduler" Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.920583 4713 generic.go:334] "Generic (PLEG): container finished" podID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerID="d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab" exitCode=0 Jan 26 15:58:57 crc 
kubenswrapper[4713]: I0126 15:58:57.920660 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa","Type":"ContainerDied","Data":"d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab"} Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.927725 4713 generic.go:334] "Generic (PLEG): container finished" podID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerID="eb9ab08347c63aed6b9a8259e9e9b86139a9a0d55bf45bdc2325d88eb4d6df27" exitCode=0 Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.927800 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerDied","Data":"eb9ab08347c63aed6b9a8259e9e9b86139a9a0d55bf45bdc2325d88eb4d6df27"} Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.935008 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9","Type":"ContainerStarted","Data":"a884b9d2d6b06a68150134010a9d7d515b00638812c28ce7ae6058140238b660"} Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.935056 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"10eb8f34-03cf-4b24-b8fd-63fe3886d2d9","Type":"ContainerStarted","Data":"21bc43ed6588a846d368ba8e6e6cdf19a73a2d661d595049f0d9ccbe04b78cf6"} Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.935211 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 15:58:57 crc kubenswrapper[4713]: I0126 15:58:57.958154 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.958132372 podStartE2EDuration="2.958132372s" podCreationTimestamp="2026-01-26 15:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:57.951471595 +0000 UTC m=+1513.088488830" watchObservedRunningTime="2026-01-26 15:58:57.958132372 +0000 UTC m=+1513.095149607" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.447773 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.535946 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle\") pod \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.536413 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46bv5\" (UniqueName: \"kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5\") pod \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.536634 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data\") pod \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\" (UID: \"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.540885 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5" (OuterVolumeSpecName: "kube-api-access-46bv5") pod "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" (UID: "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa"). InnerVolumeSpecName "kube-api-access-46bv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.566247 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" (UID: "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.567535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data" (OuterVolumeSpecName: "config-data") pod "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" (UID: "c2e2546e-6334-4dab-bcbf-4fca1e6b83aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.623755 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.639455 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46bv5\" (UniqueName: \"kubernetes.io/projected/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-kube-api-access-46bv5\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.639486 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.639500 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.740727 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs\") pod \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.740841 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25z5h\" (UniqueName: \"kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h\") pod \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.740896 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data\") pod \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.740975 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle\") pod \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\" (UID: \"9e1d7724-b8b1-4865-ad1a-dba30ce76123\") "
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.741328 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs" (OuterVolumeSpecName: "logs") pod "9e1d7724-b8b1-4865-ad1a-dba30ce76123" (UID: "9e1d7724-b8b1-4865-ad1a-dba30ce76123"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.741707 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1d7724-b8b1-4865-ad1a-dba30ce76123-logs\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.746609 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h" (OuterVolumeSpecName: "kube-api-access-25z5h") pod "9e1d7724-b8b1-4865-ad1a-dba30ce76123" (UID: "9e1d7724-b8b1-4865-ad1a-dba30ce76123"). InnerVolumeSpecName "kube-api-access-25z5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.775396 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e1d7724-b8b1-4865-ad1a-dba30ce76123" (UID: "9e1d7724-b8b1-4865-ad1a-dba30ce76123"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.775876 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data" (OuterVolumeSpecName: "config-data") pod "9e1d7724-b8b1-4865-ad1a-dba30ce76123" (UID: "9e1d7724-b8b1-4865-ad1a-dba30ce76123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.843644 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.843695 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25z5h\" (UniqueName: \"kubernetes.io/projected/9e1d7724-b8b1-4865-ad1a-dba30ce76123-kube-api-access-25z5h\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.843713 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1d7724-b8b1-4865-ad1a-dba30ce76123-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.950177 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1d7724-b8b1-4865-ad1a-dba30ce76123","Type":"ContainerDied","Data":"c0a2ebd1d583063d53706bcf28bcb7ecc7f9013b6828d5df280d7463e87c9bed"} Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.950246 4713 scope.go:117] "RemoveContainer" containerID="eb9ab08347c63aed6b9a8259e9e9b86139a9a0d55bf45bdc2325d88eb4d6df27" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.950197 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.953279 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.953882 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2e2546e-6334-4dab-bcbf-4fca1e6b83aa","Type":"ContainerDied","Data":"3785918717c90f79cd25467bd65f0f157e9424702f1805052f826fd5c816e85a"}
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.980055 4713 scope.go:117] "RemoveContainer" containerID="96cc34764eb2231e9bd316f7b9fb0c856ba2c373b70a906fc8dabc2146ca2f29"
Jan 26 15:58:58 crc kubenswrapper[4713]: I0126 15:58:58.992143 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.004090 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.013195 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.018162 4713 scope.go:117] "RemoveContainer" containerID="d958a16825a4122f3bbd66df879649b6df598aeccc12e5b3ac3ecf13c06a21ab"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.025402 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.034841 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: E0126 15:58:59.035592 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerName="nova-scheduler-scheduler"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035634 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerName="nova-scheduler-scheduler"
Jan 26 15:58:59 crc kubenswrapper[4713]: E0126 15:58:59.035646 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-api"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035654 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-api"
Jan 26 15:58:59 crc kubenswrapper[4713]: E0126 15:58:59.035666 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-log"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035672 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-log"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035948 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-log"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035973 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" containerName="nova-api-api"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.035992 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" containerName="nova-scheduler-scheduler"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.037007 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.038698 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.052760 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.064436 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.066277 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.070845 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.081333 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150525 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95fwh\" (UniqueName: \"kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150605 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ngk\" (UniqueName: \"kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150779 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150808 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150869 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150928 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.150961 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0"
\"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253167 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253216 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253274 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253335 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253401 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253446 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95fwh\" (UniqueName: \"kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ngk\" (UniqueName: \"kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.253925 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.257697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.258115 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.259147 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.260026 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.272088 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95fwh\" (UniqueName: \"kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh\") pod \"nova-scheduler-0\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.277869 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ngk\" (UniqueName: \"kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk\") pod \"nova-api-0\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " pod="openstack/nova-api-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.363862 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.388310 4713 util.go:30] "No sandbox for pod can be found. 
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.815127 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1d7724-b8b1-4865-ad1a-dba30ce76123" path="/var/lib/kubelet/pods/9e1d7724-b8b1-4865-ad1a-dba30ce76123/volumes"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.816534 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e2546e-6334-4dab-bcbf-4fca1e6b83aa" path="/var/lib/kubelet/pods/c2e2546e-6334-4dab-bcbf-4fca1e6b83aa/volumes"
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.864114 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.965803 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f47dedd0-c816-416f-a64b-aa5fb5674ea7","Type":"ContainerStarted","Data":"821d5b677e9a301c283ac821866f8e20c5be0464bb8865d6bce666e092de20ec"}
Jan 26 15:58:59 crc kubenswrapper[4713]: I0126 15:58:59.989462 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 15:58:59 crc kubenswrapper[4713]: W0126 15:58:59.989953 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05981a0_34f5_4e73_936d_6c7d464cb13a.slice/crio-8b7ed8e1f19023ce5081c0ac141f400d5eaca89a6e9239e9786f0bc4daa53221 WatchSource:0}: Error finding container 8b7ed8e1f19023ce5081c0ac141f400d5eaca89a6e9239e9786f0bc4daa53221: Status 404 returned error can't find the container with id 8b7ed8e1f19023ce5081c0ac141f400d5eaca89a6e9239e9786f0bc4daa53221
Jan 26 15:59:00 crc kubenswrapper[4713]: I0126 15:59:00.983597 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f47dedd0-c816-416f-a64b-aa5fb5674ea7","Type":"ContainerStarted","Data":"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7"}
Jan 26 15:59:00 crc kubenswrapper[4713]: I0126 15:59:00.987734 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerStarted","Data":"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435"}
Jan 26 15:59:00 crc kubenswrapper[4713]: I0126 15:59:00.987773 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerStarted","Data":"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa"}
Jan 26 15:59:00 crc kubenswrapper[4713]: I0126 15:59:00.987787 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerStarted","Data":"8b7ed8e1f19023ce5081c0ac141f400d5eaca89a6e9239e9786f0bc4daa53221"}
Jan 26 15:59:01 crc kubenswrapper[4713]: I0126 15:59:01.024539 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.024513754 podStartE2EDuration="3.024513754s" podCreationTimestamp="2026-01-26 15:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:01.017751514 +0000 UTC m=+1516.154768759" watchObservedRunningTime="2026-01-26 15:59:01.024513754 +0000 UTC m=+1516.161530999"
Jan 26 15:59:01 crc kubenswrapper[4713]: I0126 15:59:01.060178 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.060144895 podStartE2EDuration="2.060144895s" podCreationTimestamp="2026-01-26 15:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:01.045831003 +0000 UTC m=+1516.182848268" watchObservedRunningTime="2026-01-26 15:59:01.060144895 +0000 UTC m=+1516.197162140"
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.060144895 podStartE2EDuration="2.060144895s" podCreationTimestamp="2026-01-26 15:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:01.045831003 +0000 UTC m=+1516.182848268" watchObservedRunningTime="2026-01-26 15:59:01.060144895 +0000 UTC m=+1516.197162140" Jan 26 15:59:04 crc kubenswrapper[4713]: I0126 15:59:04.364557 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:59:06 crc kubenswrapper[4713]: I0126 15:59:06.354693 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 15:59:08 crc kubenswrapper[4713]: I0126 15:59:08.444170 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 15:59:09 crc kubenswrapper[4713]: I0126 15:59:09.364794 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 15:59:09 crc kubenswrapper[4713]: I0126 15:59:09.390458 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:59:09 crc kubenswrapper[4713]: I0126 15:59:09.390507 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:59:09 crc kubenswrapper[4713]: I0126 15:59:09.397977 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:59:10 crc kubenswrapper[4713]: I0126 15:59:10.157755 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:59:10 crc kubenswrapper[4713]: I0126 15:59:10.472544 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:10 crc kubenswrapper[4713]: I0126 15:59:10.472553 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.180148 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.180885 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" containerName="kube-state-metrics" containerID="cri-o://553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be" gracePeriod=30 Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.782715 4713 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.858676 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx4pj\" (UniqueName: \"kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj\") pod \"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4\" (UID: \"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4\") "
Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.865921 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj" (OuterVolumeSpecName: "kube-api-access-gx4pj") pod "2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" (UID: "2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4"). InnerVolumeSpecName "kube-api-access-gx4pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:59:12 crc kubenswrapper[4713]: I0126 15:59:12.961466 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx4pj\" (UniqueName: \"kubernetes.io/projected/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4-kube-api-access-gx4pj\") on node \"crc\" DevicePath \"\""
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.139636 4713 generic.go:334] "Generic (PLEG): container finished" podID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" containerID="553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be" exitCode=2
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.139733 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.140151 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4","Type":"ContainerDied","Data":"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"}
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.140213 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4","Type":"ContainerDied","Data":"a8789b756003864cdaee3061f6811d27a4becad5513a9266ac4068517c34f243"}
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.140235 4713 scope.go:117] "RemoveContainer" containerID="553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.177922 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.181913 4713 scope.go:117] "RemoveContainer" containerID="553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"
Jan 26 15:59:13 crc kubenswrapper[4713]: E0126 15:59:13.182494 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be\": container with ID starting with 553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be not found: ID does not exist" containerID="553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.182533 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be"} err="failed to get container status \"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be\": rpc error: code = NotFound desc = could not find container \"553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be\": container with ID starting with 553d5425d14835985d42398b190141105d49254c8fb23e9ee1b8895389bc82be not found: ID does not exist"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.209786 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.224023 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:59:13 crc kubenswrapper[4713]: E0126 15:59:13.224547 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" containerName="kube-state-metrics"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.224568 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" containerName="kube-state-metrics"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.224845 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" containerName="kube-state-metrics"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.225712 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.228182 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.229293 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.235437 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.266807 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.266869 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.266964 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.267064 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5dlm\" (UniqueName: \"kubernetes.io/projected/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-api-access-d5dlm\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0"
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.369011 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5dlm\" (UniqueName: \"kubernetes.io/projected/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-api-access-d5dlm\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5dlm\" (UniqueName: \"kubernetes.io/projected/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-api-access-d5dlm\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.369159 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.369865 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.369933 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.373931 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.376206 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.380079 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.388103 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5dlm\" (UniqueName: \"kubernetes.io/projected/b32ad743-8c23-46d2-83aa-4eef34971aa7-kube-api-access-d5dlm\") pod \"kube-state-metrics-0\" (UID: \"b32ad743-8c23-46d2-83aa-4eef34971aa7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.540896 4713 util.go:30] "No sandbox for pod can be found. 
Jan 26 15:59:13 crc kubenswrapper[4713]: I0126 15:59:13.817694 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4" path="/var/lib/kubelet/pods/2b8ffa12-31aa-4ac3-87c1-952d2dbb47b4/volumes"
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.107625 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.152311 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b32ad743-8c23-46d2-83aa-4eef34971aa7","Type":"ContainerStarted","Data":"8e329c4910f58d6e9aecc3ac77c0dceff0b52cc8c3822c2528b44ab126990f73"}
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.212210 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.212511 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-central-agent" containerID="cri-o://d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971" gracePeriod=30
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.212551 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="proxy-httpd" containerID="cri-o://4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599" gracePeriod=30
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.212589 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-notification-agent" containerID="cri-o://a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6" gracePeriod=30
Jan 26 15:59:14 crc kubenswrapper[4713]: I0126 15:59:14.212609 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="sg-core" containerID="cri-o://fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1" gracePeriod=30
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.162295 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b32ad743-8c23-46d2-83aa-4eef34971aa7","Type":"ContainerStarted","Data":"0755006c32c786f8f9fb1bcf00c6f68488257d6dbd7950ab3f940e4fb304ea99"}
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.162808 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165347 4713 generic.go:334] "Generic (PLEG): container finished" podID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerID="4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599" exitCode=0
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165398 4713 generic.go:334] "Generic (PLEG): container finished" podID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerID="fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1" exitCode=2
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165408 4713 generic.go:334] "Generic (PLEG): container finished" podID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerID="d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971" exitCode=0
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165426 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerDied","Data":"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599"}
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165450 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerDied","Data":"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1"}
Jan 26 15:59:15 crc kubenswrapper[4713]: I0126 15:59:15.165464 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerDied","Data":"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971"}
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.201184 4713 generic.go:334] "Generic (PLEG): container finished" podID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerID="46768f605c2c316aa7a17408b7431ae8eb8dd67cee4e44fb1b5d9a26c2b99d97" exitCode=137
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.201278 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerDied","Data":"46768f605c2c316aa7a17408b7431ae8eb8dd67cee4e44fb1b5d9a26c2b99d97"}
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.201909 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229","Type":"ContainerDied","Data":"5ababd947818e86a85dde41bfe1bc57d99a631f34cc561e8d2bc433bd7dccf97"}
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.201930 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ababd947818e86a85dde41bfe1bc57d99a631f34cc561e8d2bc433bd7dccf97"
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.211197 4713 generic.go:334] "Generic (PLEG): container finished" podID="82b37183-5b76-4014-aaad-d8356670e767" containerID="1b5cc266fc3d11f0ffd94d83cdc636d994503b59918464cc510c0cd269546c9b" exitCode=137
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.211297 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82b37183-5b76-4014-aaad-d8356670e767","Type":"ContainerDied","Data":"1b5cc266fc3d11f0ffd94d83cdc636d994503b59918464cc510c0cd269546c9b"}
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.222009 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.247928 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.899594654 podStartE2EDuration="3.247902839s" podCreationTimestamp="2026-01-26 15:59:13 +0000 UTC" firstStartedPulling="2026-01-26 15:59:14.107985792 +0000 UTC m=+1529.245003047" lastFinishedPulling="2026-01-26 15:59:14.456293997 +0000 UTC m=+1529.593311232" observedRunningTime="2026-01-26 15:59:15.181541971 +0000 UTC m=+1530.318559216" watchObservedRunningTime="2026-01-26 15:59:16.247902839 +0000 UTC m=+1531.384920084"
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.296665 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.349477 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data\") pod \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") "
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.349627 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle\") pod \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") "
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.349876 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wxnp\" (UniqueName: \"kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp\") pod \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") "
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.349955 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs\") pod \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\" (UID: \"c5dc8ecd-562c-4e2c-be7b-aaf2b088c229\") "
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.351733 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs" (OuterVolumeSpecName: "logs") pod "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" (UID: "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.380569 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp" (OuterVolumeSpecName: "kube-api-access-5wxnp") pod "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" (UID: "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229"). InnerVolumeSpecName "kube-api-access-5wxnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.408564 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data" (OuterVolumeSpecName: "config-data") pod "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" (UID: "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.408928 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" (UID: "c5dc8ecd-562c-4e2c-be7b-aaf2b088c229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.452350 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkm7w\" (UniqueName: \"kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w\") pod \"82b37183-5b76-4014-aaad-d8356670e767\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.452642 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data\") pod \"82b37183-5b76-4014-aaad-d8356670e767\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.452712 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle\") pod \"82b37183-5b76-4014-aaad-d8356670e767\" (UID: \"82b37183-5b76-4014-aaad-d8356670e767\") " Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.453312 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.453348 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.453376 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.453387 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wxnp\" (UniqueName: \"kubernetes.io/projected/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229-kube-api-access-5wxnp\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.462504 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w" (OuterVolumeSpecName: "kube-api-access-zkm7w") pod "82b37183-5b76-4014-aaad-d8356670e767" (UID: "82b37183-5b76-4014-aaad-d8356670e767"). InnerVolumeSpecName "kube-api-access-zkm7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.555521 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82b37183-5b76-4014-aaad-d8356670e767" (UID: "82b37183-5b76-4014-aaad-d8356670e767"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.556799 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data" (OuterVolumeSpecName: "config-data") pod "82b37183-5b76-4014-aaad-d8356670e767" (UID: "82b37183-5b76-4014-aaad-d8356670e767"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.557319 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkm7w\" (UniqueName: \"kubernetes.io/projected/82b37183-5b76-4014-aaad-d8356670e767-kube-api-access-zkm7w\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.557382 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:16 crc kubenswrapper[4713]: I0126 15:59:16.557395 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b37183-5b76-4014-aaad-d8356670e767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.223131 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.223225 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.224888 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82b37183-5b76-4014-aaad-d8356670e767","Type":"ContainerDied","Data":"2b1e3ac7087227438873d8b41719392655491dd2edc70180b7ebbbccfbedbee6"} Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.224942 4713 scope.go:117] "RemoveContainer" containerID="1b5cc266fc3d11f0ffd94d83cdc636d994503b59918464cc510c0cd269546c9b" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.265133 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.288306 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.302767 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.326431 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.329833 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:59:17 crc kubenswrapper[4713]: E0126 15:59:17.330512 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-log" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.332389 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-log" Jan 26 15:59:17 crc kubenswrapper[4713]: E0126 15:59:17.332511 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-metadata" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.332589 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-metadata" Jan 26 15:59:17 crc kubenswrapper[4713]: E0126 15:59:17.332670 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b37183-5b76-4014-aaad-d8356670e767" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 15:59:17 crc 
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.332731 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b37183-5b76-4014-aaad-d8356670e767" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.333020 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-log"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.333105 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b37183-5b76-4014-aaad-d8356670e767" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.333190 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" containerName="nova-metadata-metadata"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.334812 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.340003 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.340336 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.340889 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.346398 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.355554 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.357413 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.359279 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.359916 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.368969 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474398 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474489 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474518 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474577 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdvq\" (UniqueName: \"kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474713 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474747 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4cp\" (UniqueName: \"kubernetes.io/projected/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-kube-api-access-4x4cp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474796 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0"
Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.474824 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0"
\"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.475074 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.475119 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.577862 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579158 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4cp\" (UniqueName: \"kubernetes.io/projected/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-kube-api-access-4x4cp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579254 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579304 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579449 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579534 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579572 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579647 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579680 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.579782 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdvq\" (UniqueName: \"kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.580659 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.585817 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.591089 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.597701 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.598245 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.598967 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.599404 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.607481 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4cp\" (UniqueName: \"kubernetes.io/projected/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-kube-api-access-4x4cp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.608449 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdvq\" (UniqueName: \"kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq\") pod \"nova-metadata-0\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.614099 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc31f1a-f23d-4efd-bf16-3796bc2a948d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc31f1a-f23d-4efd-bf16-3796bc2a948d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.665531 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.682221 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.829517 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b37183-5b76-4014-aaad-d8356670e767" path="/var/lib/kubelet/pods/82b37183-5b76-4014-aaad-d8356670e767/volumes" Jan 26 15:59:17 crc kubenswrapper[4713]: I0126 15:59:17.830642 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5dc8ecd-562c-4e2c-be7b-aaf2b088c229" path="/var/lib/kubelet/pods/c5dc8ecd-562c-4e2c-be7b-aaf2b088c229/volumes" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.187356 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: W0126 15:59:18.236033 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e4fb14f_b1a8_4d60_8355_9f5b1c8bf4ea.slice/crio-b59f5b2bcbbf04e8c4b5636179823f857eec8358aa8ec3c55c052db13462df86 WatchSource:0}: Error finding container b59f5b2bcbbf04e8c4b5636179823f857eec8358aa8ec3c55c052db13462df86: Status 404 returned error can't find the container with id b59f5b2bcbbf04e8c4b5636179823f857eec8358aa8ec3c55c052db13462df86 Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.239298 4713 generic.go:334] "Generic (PLEG): container finished" podID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerID="a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6" exitCode=0 Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.239340 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerDied","Data":"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6"} Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.239385 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fda03791-ed50-4db3-ab38-8bf1ec8d607d","Type":"ContainerDied","Data":"e015eb261bbfe6266a78f838f8435a19a0c77f0153869b9a9440dc6ec98fb024"} Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.239409 4713 scope.go:117] "RemoveContainer" containerID="4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.239436 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.247675 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.273260 4713 scope.go:117] "RemoveContainer" containerID="fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296492 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52lwf\" (UniqueName: \"kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296585 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296607 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296658 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc 
kubenswrapper[4713]: I0126 15:59:18.296729 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296856 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.296880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data\") pod \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\" (UID: \"fda03791-ed50-4db3-ab38-8bf1ec8d607d\") " Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.299209 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.299342 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.300190 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.300212 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fda03791-ed50-4db3-ab38-8bf1ec8d607d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.303795 4713 scope.go:117] "RemoveContainer" containerID="a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.304408 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf" (OuterVolumeSpecName: "kube-api-access-52lwf") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "kube-api-access-52lwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.304830 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts" (OuterVolumeSpecName: "scripts") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.332520 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.339555 4713 scope.go:117] "RemoveContainer" containerID="d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.365587 4713 scope.go:117] "RemoveContainer" containerID="4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.365989 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599\": container with ID starting with 4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599 not found: ID does not exist" containerID="4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366030 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599"} err="failed to get container status \"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599\": rpc error: code = NotFound desc = could not find container \"4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599\": container with ID starting with 4ea2a33c54e8a47adebd1a7af432e934fc592c21480ed0afb4e9b361a8c5e599 not found: ID does not exist" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366053 4713 scope.go:117] "RemoveContainer" containerID="fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.366240 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1\": container with ID starting with fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1 not found: ID does not exist" containerID="fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366260 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1"} err="failed to get container status \"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1\": rpc error: code = NotFound desc = could not find container \"fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1\": container with ID starting with fcc38ef85e06c17c154c2831df886a51dccf1efa2f92bba24af7abe9404108a1 not found: ID does not exist" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366273 4713 scope.go:117] "RemoveContainer" containerID="a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.366494 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6\": container with ID starting with a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6 not found: ID does not exist" containerID="a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366513 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6"} err="failed to get container status \"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6\": rpc error: code = NotFound desc = could not find container \"a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6\": container with ID starting with a2a4fdf09660907b6c5af96cc093ce8e78520a816cd5159b7426a081e1cc46d6 not found: ID does not exist" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366527 4713 scope.go:117] "RemoveContainer" containerID="d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.366720 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971\": container with ID starting with d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971 not found: ID does not exist" containerID="d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.366741 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971"} err="failed to get container status \"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971\": rpc error: code = NotFound desc = could not find container \"d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971\": container with ID starting with d758fbd52089db58a182ef7c8b0dc5567ac3b57962ebcbfb8f8407df85611971 not found: ID does not exist" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.402008 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52lwf\" (UniqueName: \"kubernetes.io/projected/fda03791-ed50-4db3-ab38-8bf1ec8d607d-kube-api-access-52lwf\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.402039 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.402048 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.420921 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.426925 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.465081 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data" (OuterVolumeSpecName: "config-data") pod "fda03791-ed50-4db3-ab38-8bf1ec8d607d" (UID: "fda03791-ed50-4db3-ab38-8bf1ec8d607d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.503987 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.504021 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda03791-ed50-4db3-ab38-8bf1ec8d607d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.580560 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.605745 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.618381 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.618801 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="proxy-httpd" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.618818 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="proxy-httpd" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.618836 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-notification-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.618843 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-notification-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.618867 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-central-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.618874 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-central-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: E0126 15:59:18.618884 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="sg-core" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.618892 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="sg-core" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.619081 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="proxy-httpd" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.619099 4713 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-notification-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.619128 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="ceilometer-central-agent" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.619135 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" containerName="sg-core" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.620992 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.624055 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.624246 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.624294 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.631343 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707787 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707871 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707905 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707921 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8f5b\" (UniqueName: \"kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707958 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.707997 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 
15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.708030 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.708074 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810224 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810286 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810431 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810476 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810498 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8f5b\" (UniqueName: \"kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810548 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.810635 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.811388 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.811484 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.815954 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.816743 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.817846 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.817902 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.835036 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:18 crc kubenswrapper[4713]: I0126 15:59:18.843218 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8f5b\" (UniqueName: \"kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b\") pod \"ceilometer-0\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " pod="openstack/ceilometer-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.018171 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.262573 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerStarted","Data":"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c"} Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.262954 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerStarted","Data":"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa"} Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.262975 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerStarted","Data":"b59f5b2bcbbf04e8c4b5636179823f857eec8358aa8ec3c55c052db13462df86"} Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.266697 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc31f1a-f23d-4efd-bf16-3796bc2a948d","Type":"ContainerStarted","Data":"aab957b9279dab9e602b0e5d66340183c39184f40d0c665409777de7ed13b2aa"} Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.266740 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc31f1a-f23d-4efd-bf16-3796bc2a948d","Type":"ContainerStarted","Data":"832432e5544bc4344fffdc02ee972dac4a4a4fdf7c96fe0988fcfddba8caed8a"} Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.295062 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.295039911 podStartE2EDuration="2.295039911s" podCreationTimestamp="2026-01-26 15:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:19.285640587 +0000 UTC m=+1534.422657822" watchObservedRunningTime="2026-01-26 15:59:19.295039911 +0000 UTC m=+1534.432057136" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.315826 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.315807174 podStartE2EDuration="2.315807174s" podCreationTimestamp="2026-01-26 15:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:19.305432172 +0000 UTC m=+1534.442449407" watchObservedRunningTime="2026-01-26 15:59:19.315807174 +0000 UTC m=+1534.452824409" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.397923 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.399211 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.406390 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.406448 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.633005 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 
Jan 26 15:59:19 crc kubenswrapper[4713]: I0126 15:59:19.816075 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda03791-ed50-4db3-ab38-8bf1ec8d607d" path="/var/lib/kubelet/pods/fda03791-ed50-4db3-ab38-8bf1ec8d607d/volumes" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.281870 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerStarted","Data":"987e9a846f1cf541da9bd4d6d7b26094837806102fdfd0c47afb3117cf0c8c25"} Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.282442 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.288273 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.483762 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.485960 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.494434 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514004 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514070 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514092 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514114 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514141 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wtx\" (UniqueName: \"kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.514201 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616039 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616145 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616175 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616206 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616240 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8wtx\" (UniqueName: \"kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.616321 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.617544 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.617576 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.617601 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb\") 
pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.617746 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.618305 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.655267 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8wtx\" (UniqueName: \"kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx\") pod \"dnsmasq-dns-5fd9b586ff-4k4wf\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:20 crc kubenswrapper[4713]: I0126 15:59:20.816751 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:21 crc kubenswrapper[4713]: I0126 15:59:21.296914 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerStarted","Data":"44521bc749c2fd182341e4057feed25803b16ee635d881f1e9ad3141a5fad4c1"} Jan 26 15:59:21 crc kubenswrapper[4713]: I0126 15:59:21.297213 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerStarted","Data":"2e709c349a45e8581f47562e2053ab113e9d48a59383b66bea504fffccee6446"} Jan 26 15:59:21 crc kubenswrapper[4713]: I0126 15:59:21.384309 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.309187 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerStarted","Data":"05f42c54946efea16566011a76d475f04a81c4c7dbcf52d77a362d7e6778756d"} Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.312382 4713 generic.go:334] "Generic (PLEG): container finished" podID="c56be499-f359-4178-a9b2-df69f97d684f" containerID="b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba" exitCode=0 Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.312467 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" event={"ID":"c56be499-f359-4178-a9b2-df69f97d684f","Type":"ContainerDied","Data":"b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba"} Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.312525 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" event={"ID":"c56be499-f359-4178-a9b2-df69f97d684f","Type":"ContainerStarted","Data":"e958173d66866986ad2124ee00fa4401bb7104f089b1ba5f0e0d03ac985d2f07"} Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.665947 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.682864 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.683966 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:59:22 crc kubenswrapper[4713]: I0126 15:59:22.917844 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.141635 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.326525 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" event={"ID":"c56be499-f359-4178-a9b2-df69f97d684f","Type":"ContainerStarted","Data":"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537"} Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.326724 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-api" containerID="cri-o://43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435" gracePeriod=30 Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.326682 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-log" containerID="cri-o://cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa" gracePeriod=30 Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.326874 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.379133 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" podStartSLOduration=3.379112563 podStartE2EDuration="3.379112563s" podCreationTimestamp="2026-01-26 15:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:23.361832758 +0000 UTC m=+1538.498850013" watchObservedRunningTime="2026-01-26 15:59:23.379112563 +0000 UTC m=+1538.516129798" Jan 26 15:59:23 crc kubenswrapper[4713]: I0126 15:59:23.578749 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337321 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerStarted","Data":"21266eeab50b9099e6c08e8436eb61b7d16327292a2859eb01f720d1f0b09f26"} Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337532 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-central-agent" containerID="cri-o://2e709c349a45e8581f47562e2053ab113e9d48a59383b66bea504fffccee6446" gracePeriod=30 Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337542 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="sg-core" 
containerID="cri-o://05f42c54946efea16566011a76d475f04a81c4c7dbcf52d77a362d7e6778756d" gracePeriod=30 Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337569 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-notification-agent" containerID="cri-o://44521bc749c2fd182341e4057feed25803b16ee635d881f1e9ad3141a5fad4c1" gracePeriod=30 Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337569 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="proxy-httpd" containerID="cri-o://21266eeab50b9099e6c08e8436eb61b7d16327292a2859eb01f720d1f0b09f26" gracePeriod=30 Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.337760 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.341899 4713 generic.go:334] "Generic (PLEG): container finished" podID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerID="cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa" exitCode=143 Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.342574 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerDied","Data":"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa"} Jan 26 15:59:24 crc kubenswrapper[4713]: I0126 15:59:24.360919 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.696951384 podStartE2EDuration="6.360897724s" podCreationTimestamp="2026-01-26 15:59:18 +0000 UTC" firstStartedPulling="2026-01-26 15:59:19.637250224 +0000 UTC m=+1534.774267469" lastFinishedPulling="2026-01-26 15:59:23.301196554 +0000 UTC m=+1538.438213809" observedRunningTime="2026-01-26 15:59:24.357777026 +0000 UTC m=+1539.494794281" watchObservedRunningTime="2026-01-26 15:59:24.360897724 +0000 UTC m=+1539.497914959" Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352130 4713 generic.go:334] "Generic (PLEG): container finished" podID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerID="21266eeab50b9099e6c08e8436eb61b7d16327292a2859eb01f720d1f0b09f26" exitCode=0 Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352163 4713 generic.go:334] "Generic (PLEG): container finished" podID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerID="05f42c54946efea16566011a76d475f04a81c4c7dbcf52d77a362d7e6778756d" exitCode=2 Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352169 4713 generic.go:334] "Generic (PLEG): container finished" podID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerID="44521bc749c2fd182341e4057feed25803b16ee635d881f1e9ad3141a5fad4c1" exitCode=0 Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352191 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerDied","Data":"21266eeab50b9099e6c08e8436eb61b7d16327292a2859eb01f720d1f0b09f26"} Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352218 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerDied","Data":"05f42c54946efea16566011a76d475f04a81c4c7dbcf52d77a362d7e6778756d"} Jan 26 15:59:25 crc kubenswrapper[4713]: I0126 15:59:25.352231 
4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerDied","Data":"44521bc749c2fd182341e4057feed25803b16ee635d881f1e9ad3141a5fad4c1"} Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.028455 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.153046 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle\") pod \"b05981a0-34f5-4e73-936d-6c7d464cb13a\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.153178 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs\") pod \"b05981a0-34f5-4e73-936d-6c7d464cb13a\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.153295 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data\") pod \"b05981a0-34f5-4e73-936d-6c7d464cb13a\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.153435 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25ngk\" (UniqueName: \"kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk\") pod \"b05981a0-34f5-4e73-936d-6c7d464cb13a\" (UID: \"b05981a0-34f5-4e73-936d-6c7d464cb13a\") " Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.153796 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs" (OuterVolumeSpecName: "logs") pod "b05981a0-34f5-4e73-936d-6c7d464cb13a" (UID: "b05981a0-34f5-4e73-936d-6c7d464cb13a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.154064 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05981a0-34f5-4e73-936d-6c7d464cb13a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.175687 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk" (OuterVolumeSpecName: "kube-api-access-25ngk") pod "b05981a0-34f5-4e73-936d-6c7d464cb13a" (UID: "b05981a0-34f5-4e73-936d-6c7d464cb13a"). InnerVolumeSpecName "kube-api-access-25ngk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.187747 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b05981a0-34f5-4e73-936d-6c7d464cb13a" (UID: "b05981a0-34f5-4e73-936d-6c7d464cb13a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.192621 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data" (OuterVolumeSpecName: "config-data") pod "b05981a0-34f5-4e73-936d-6c7d464cb13a" (UID: "b05981a0-34f5-4e73-936d-6c7d464cb13a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.256422 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.256467 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25ngk\" (UniqueName: \"kubernetes.io/projected/b05981a0-34f5-4e73-936d-6c7d464cb13a-kube-api-access-25ngk\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.256479 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05981a0-34f5-4e73-936d-6c7d464cb13a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.383398 4713 generic.go:334] "Generic (PLEG): container finished" podID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerID="2e709c349a45e8581f47562e2053ab113e9d48a59383b66bea504fffccee6446" exitCode=0 Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.383445 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerDied","Data":"2e709c349a45e8581f47562e2053ab113e9d48a59383b66bea504fffccee6446"} Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.386379 4713 generic.go:334] "Generic (PLEG): container finished" podID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerID="43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435" exitCode=0 Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.386410 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerDied","Data":"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435"} Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.386430 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05981a0-34f5-4e73-936d-6c7d464cb13a","Type":"ContainerDied","Data":"8b7ed8e1f19023ce5081c0ac141f400d5eaca89a6e9239e9786f0bc4daa53221"} Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.386455 4713 scope.go:117] "RemoveContainer" containerID="43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.386593 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.432701 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.442100 4713 scope.go:117] "RemoveContainer" containerID="cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.495554 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.522688 4713 scope.go:117] "RemoveContainer" containerID="43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435" Jan 26 15:59:27 crc kubenswrapper[4713]: E0126 15:59:27.528672 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435\": container with ID starting with 43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435 not found: ID does not exist" containerID="43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.528727 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435"} err="failed to get container status \"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435\": rpc error: code = NotFound desc = could not find container \"43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435\": container with ID starting with 43434f8098423cf2657b7b1af7bf3e2f37a32ccb385420fba2638fd1fd8e4435 not found: ID does not exist" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.528763 4713 scope.go:117] "RemoveContainer" containerID="cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa" Jan 26 15:59:27 crc kubenswrapper[4713]: E0126 15:59:27.529468 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa\": container with ID starting with cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa not found: ID does not exist" containerID="cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.529486 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa"} err="failed to get container status \"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa\": rpc error: code = NotFound desc = could not find container \"cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa\": container with ID starting with cd821c841a6826f49063f70ed81bcfbc021b011a8a943297810af970e2cf39fa not found: ID does not exist" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.568432 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:27 crc kubenswrapper[4713]: E0126 15:59:27.568945 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-api" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.568962 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-api" Jan 26 15:59:27 crc 
kubenswrapper[4713]: E0126 15:59:27.568979 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-log" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.568985 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-log" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.569208 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-log" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.569224 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" containerName="nova-api-api" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.570345 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.574773 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.574924 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.575037 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.592970 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.665960 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.681912 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.681977 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8xz\" (UniqueName: \"kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.682033 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.682112 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.682420 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.682630 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.683104 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.683137 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.707706 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784681 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784721 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8xz\" (UniqueName: \"kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784745 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784800 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.784884 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.788223 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.790132 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.790429 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.790866 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.801577 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8xz\" (UniqueName: \"kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.816508 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05981a0-34f5-4e73-936d-6c7d464cb13a" path="/var/lib/kubelet/pods/b05981a0-34f5-4e73-936d-6c7d464cb13a/volumes" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.819211 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data\") pod \"nova-api-0\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " pod="openstack/nova-api-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.938513 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:27 crc kubenswrapper[4713]: I0126 15:59:27.953337 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090442 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8f5b\" (UniqueName: \"kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090524 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090562 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090625 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090665 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090698 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090723 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.090792 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd\") pod \"8d62e70a-7931-409e-a43e-0b1918e6b566\" (UID: \"8d62e70a-7931-409e-a43e-0b1918e6b566\") " Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.091689 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.096396 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.100781 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b" (OuterVolumeSpecName: "kube-api-access-h8f5b") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "kube-api-access-h8f5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.101450 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts" (OuterVolumeSpecName: "scripts") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.146312 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.194060 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8f5b\" (UniqueName: \"kubernetes.io/projected/8d62e70a-7931-409e-a43e-0b1918e6b566-kube-api-access-h8f5b\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.194098 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.194107 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.194127 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.194135 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d62e70a-7931-409e-a43e-0b1918e6b566-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.221504 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.241530 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.253091 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data" (OuterVolumeSpecName: "config-data") pod "8d62e70a-7931-409e-a43e-0b1918e6b566" (UID: "8d62e70a-7931-409e-a43e-0b1918e6b566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.296221 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.296298 4713 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.296313 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d62e70a-7931-409e-a43e-0b1918e6b566-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.400063 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d62e70a-7931-409e-a43e-0b1918e6b566","Type":"ContainerDied","Data":"987e9a846f1cf541da9bd4d6d7b26094837806102fdfd0c47afb3117cf0c8c25"} Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.400087 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.400121 4713 scope.go:117] "RemoveContainer" containerID="21266eeab50b9099e6c08e8436eb61b7d16327292a2859eb01f720d1f0b09f26" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.425063 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.436566 4713 scope.go:117] "RemoveContainer" containerID="05f42c54946efea16566011a76d475f04a81c4c7dbcf52d77a362d7e6778756d" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.445135 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.462581 4713 scope.go:117] "RemoveContainer" containerID="44521bc749c2fd182341e4057feed25803b16ee635d881f1e9ad3141a5fad4c1" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.472542 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.489944 4713 scope.go:117] "RemoveContainer" containerID="2e709c349a45e8581f47562e2053ab113e9d48a59383b66bea504fffccee6446" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.498711 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:28 crc kubenswrapper[4713]: E0126 15:59:28.499185 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="proxy-httpd" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499204 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="proxy-httpd" Jan 26 15:59:28 crc kubenswrapper[4713]: E0126 15:59:28.499232 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-central-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499239 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-central-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: E0126 15:59:28.499251 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="sg-core" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499256 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="sg-core" Jan 26 15:59:28 crc kubenswrapper[4713]: E0126 15:59:28.499271 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-notification-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499277 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-notification-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499539 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="sg-core" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499574 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-central-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499594 4713 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="proxy-httpd" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.499606 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" containerName="ceilometer-notification-agent" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.501524 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.503807 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.505776 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.505828 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.540179 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.556435 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.602968 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603046 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603093 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603185 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrvq\" (UniqueName: \"kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603331 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603375 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 
15:59:28.603401 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.603435 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.699625 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vzbkv"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.701051 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704831 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704861 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704883 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704909 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704934 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704959 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.704990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc 
kubenswrapper[4713]: I0126 15:59:28.705057 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrvq\" (UniqueName: \"kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.706581 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.706911 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.712142 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vzbkv"] Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.712584 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.713294 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.713805 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.714025 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.741002 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.743262 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.749934 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.750110 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrvq\" (UniqueName: 
\"kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.750222 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.752238 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data\") pod \"ceilometer-0\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.806191 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.806252 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.806322 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjlb\" (UniqueName: \"kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.806428 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.836807 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.907758 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjlb\" (UniqueName: \"kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.908796 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.908998 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.909093 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.915089 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.915234 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.915784 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.928662 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjlb\" (UniqueName: \"kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb\") pod \"nova-cell1-cell-mapping-vzbkv\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:28 crc kubenswrapper[4713]: I0126 15:59:28.946056 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.343490 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.436599 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerStarted","Data":"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791"} Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.436648 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerStarted","Data":"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047"} Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.436666 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerStarted","Data":"bc9d0dcb9934391bbb9745df5be4776c699a67ea89eb2da3aa7d872b3ff1c430"} Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.442073 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerStarted","Data":"481cc8641857f66acf4738e446d49adfb8dc7eb47b24955ebba870563ef07e4f"} Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.460890 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4608692149999998 podStartE2EDuration="2.460869215s" podCreationTimestamp="2026-01-26 15:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:29.458922441 +0000 UTC m=+1544.595939676" watchObservedRunningTime="2026-01-26 15:59:29.460869215 +0000 UTC m=+1544.597886450" Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.537423 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vzbkv"] Jan 26 15:59:29 crc kubenswrapper[4713]: I0126 15:59:29.815444 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d62e70a-7931-409e-a43e-0b1918e6b566" path="/var/lib/kubelet/pods/8d62e70a-7931-409e-a43e-0b1918e6b566/volumes" Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.453744 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vzbkv" event={"ID":"4733cb21-61c8-40e3-af0a-7375dcc21851","Type":"ContainerStarted","Data":"9073c288aba187c168cc4a661ae13d21382ca8b11fc627d980fc774c707f1233"} Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.454061 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vzbkv" event={"ID":"4733cb21-61c8-40e3-af0a-7375dcc21851","Type":"ContainerStarted","Data":"c6d96aa0ef2f77d274b02e8c4e97a5c702810c42e9bc9d46442bb7944b1473f7"} Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.456589 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerStarted","Data":"4f858b7028291bca9ce5b7a04671c06065a19314121727d9f41f7a607eabb64e"} Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.478041 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vzbkv" podStartSLOduration=2.47802182 
podStartE2EDuration="2.47802182s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:30.46947463 +0000 UTC m=+1545.606491865" watchObservedRunningTime="2026-01-26 15:59:30.47802182 +0000 UTC m=+1545.615039055" Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.824030 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.923226 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"] Jan 26 15:59:30 crc kubenswrapper[4713]: I0126 15:59:30.923852 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-j6929" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="dnsmasq-dns" containerID="cri-o://5d18f621f5b0c9f8e399aa5ca23ef63a8992ab1457d582b892fa529aa822c1b4" gracePeriod=10 Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.474027 4713 generic.go:334] "Generic (PLEG): container finished" podID="e0176622-8842-4855-8962-ad88abbdb1e5" containerID="5d18f621f5b0c9f8e399aa5ca23ef63a8992ab1457d582b892fa529aa822c1b4" exitCode=0 Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.474187 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-j6929" event={"ID":"e0176622-8842-4855-8962-ad88abbdb1e5","Type":"ContainerDied","Data":"5d18f621f5b0c9f8e399aa5ca23ef63a8992ab1457d582b892fa529aa822c1b4"} Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.474333 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-j6929" event={"ID":"e0176622-8842-4855-8962-ad88abbdb1e5","Type":"ContainerDied","Data":"6d45a93ea1d5c4aab34f2bfc83a25fb47c22a73702ef3c28d301c6c3c671481b"} Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.474345 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d45a93ea1d5c4aab34f2bfc83a25fb47c22a73702ef3c28d301c6c3c671481b" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.476083 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerStarted","Data":"e8b6b31a4853c4a29cdb14ed8d6c7e5b9d35ab2c776f8d8a844a013212a42457"} Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.493138 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-j6929" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.603830 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.603948 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.604030 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.604101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.604298 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k27gv\" (UniqueName: \"kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.604349 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc\") pod \"e0176622-8842-4855-8962-ad88abbdb1e5\" (UID: \"e0176622-8842-4855-8962-ad88abbdb1e5\") " Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.626947 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv" (OuterVolumeSpecName: "kube-api-access-k27gv") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "kube-api-access-k27gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.693407 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.708616 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.719695 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config" (OuterVolumeSpecName: "config") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.735173 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k27gv\" (UniqueName: \"kubernetes.io/projected/e0176622-8842-4855-8962-ad88abbdb1e5-kube-api-access-k27gv\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.735212 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.735224 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.735234 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.735478 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.782626 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0176622-8842-4855-8962-ad88abbdb1e5" (UID: "e0176622-8842-4855-8962-ad88abbdb1e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.836757 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:31 crc kubenswrapper[4713]: I0126 15:59:31.836784 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0176622-8842-4855-8962-ad88abbdb1e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:32 crc kubenswrapper[4713]: I0126 15:59:32.488695 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerStarted","Data":"11ba4be224f3f2e29aa14cb6b78afc797c34661d36cd8ae77f7afbfc944d4540"} Jan 26 15:59:32 crc kubenswrapper[4713]: I0126 15:59:32.488718 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-j6929" Jan 26 15:59:32 crc kubenswrapper[4713]: I0126 15:59:32.516342 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"] Jan 26 15:59:32 crc kubenswrapper[4713]: I0126 15:59:32.526097 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-j6929"] Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.301050 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.301447 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.499859 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerStarted","Data":"557291bbab58014efc43a15767441aff9008ea46d063566282170d738630a28d"} Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.501242 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.533274 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.747982041 podStartE2EDuration="5.533255089s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="2026-01-26 15:59:29.353076997 +0000 UTC m=+1544.490094222" lastFinishedPulling="2026-01-26 15:59:33.138350025 +0000 UTC m=+1548.275367270" observedRunningTime="2026-01-26 15:59:33.527427815 +0000 UTC m=+1548.664445050" watchObservedRunningTime="2026-01-26 15:59:33.533255089 +0000 UTC m=+1548.670272324" Jan 26 15:59:33 crc kubenswrapper[4713]: I0126 15:59:33.829951 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" path="/var/lib/kubelet/pods/e0176622-8842-4855-8962-ad88abbdb1e5/volumes" Jan 26 15:59:35 crc kubenswrapper[4713]: I0126 15:59:35.522326 4713 generic.go:334] "Generic (PLEG): container finished" podID="4733cb21-61c8-40e3-af0a-7375dcc21851" containerID="9073c288aba187c168cc4a661ae13d21382ca8b11fc627d980fc774c707f1233" exitCode=0 Jan 26 15:59:35 crc kubenswrapper[4713]: I0126 15:59:35.522409 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vzbkv" event={"ID":"4733cb21-61c8-40e3-af0a-7375dcc21851","Type":"ContainerDied","Data":"9073c288aba187c168cc4a661ae13d21382ca8b11fc627d980fc774c707f1233"} Jan 26 15:59:36 crc kubenswrapper[4713]: I0126 15:59:36.984023 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.039218 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data\") pod \"4733cb21-61c8-40e3-af0a-7375dcc21851\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.039401 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle\") pod \"4733cb21-61c8-40e3-af0a-7375dcc21851\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.039463 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjlb\" (UniqueName: \"kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb\") pod \"4733cb21-61c8-40e3-af0a-7375dcc21851\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.039497 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts\") pod \"4733cb21-61c8-40e3-af0a-7375dcc21851\" (UID: \"4733cb21-61c8-40e3-af0a-7375dcc21851\") " Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.046400 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts" (OuterVolumeSpecName: "scripts") pod "4733cb21-61c8-40e3-af0a-7375dcc21851" (UID: "4733cb21-61c8-40e3-af0a-7375dcc21851"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.049527 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb" (OuterVolumeSpecName: "kube-api-access-chjlb") pod "4733cb21-61c8-40e3-af0a-7375dcc21851" (UID: "4733cb21-61c8-40e3-af0a-7375dcc21851"). InnerVolumeSpecName "kube-api-access-chjlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.071489 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4733cb21-61c8-40e3-af0a-7375dcc21851" (UID: "4733cb21-61c8-40e3-af0a-7375dcc21851"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.073950 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data" (OuterVolumeSpecName: "config-data") pod "4733cb21-61c8-40e3-af0a-7375dcc21851" (UID: "4733cb21-61c8-40e3-af0a-7375dcc21851"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.142299 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.142337 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.142357 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjlb\" (UniqueName: \"kubernetes.io/projected/4733cb21-61c8-40e3-af0a-7375dcc21851-kube-api-access-chjlb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.142384 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4733cb21-61c8-40e3-af0a-7375dcc21851-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.563686 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vzbkv" event={"ID":"4733cb21-61c8-40e3-af0a-7375dcc21851","Type":"ContainerDied","Data":"c6d96aa0ef2f77d274b02e8c4e97a5c702810c42e9bc9d46442bb7944b1473f7"} Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.563740 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d96aa0ef2f77d274b02e8c4e97a5c702810c42e9bc9d46442bb7944b1473f7" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.563798 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vzbkv" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.694096 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.701333 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.704874 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.838969 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.839514 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-log" containerID="cri-o://468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" gracePeriod=30 Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.840015 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-api" containerID="cri-o://c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" gracePeriod=30 Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.855386 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.855643 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" 
containerName="nova-scheduler-scheduler" containerID="cri-o://1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" gracePeriod=30 Jan 26 15:59:37 crc kubenswrapper[4713]: I0126 15:59:37.880931 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.512149 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572323 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572458 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8xz\" (UniqueName: \"kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572503 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572542 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572570 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.572636 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle\") pod \"660b109c-ac1c-4a1b-8e62-16d557c43498\" (UID: \"660b109c-ac1c-4a1b-8e62-16d557c43498\") " Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.577326 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs" (OuterVolumeSpecName: "logs") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.580449 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz" (OuterVolumeSpecName: "kube-api-access-2s8xz") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "kube-api-access-2s8xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583697 4713 generic.go:334] "Generic (PLEG): container finished" podID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerID="c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" exitCode=0 Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583721 4713 generic.go:334] "Generic (PLEG): container finished" podID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerID="468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" exitCode=143 Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583782 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583841 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerDied","Data":"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791"} Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583913 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerDied","Data":"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047"} Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583933 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"660b109c-ac1c-4a1b-8e62-16d557c43498","Type":"ContainerDied","Data":"bc9d0dcb9934391bbb9745df5be4776c699a67ea89eb2da3aa7d872b3ff1c430"} Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.583962 4713 scope.go:117] "RemoveContainer" containerID="c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.594770 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.630507 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data" (OuterVolumeSpecName: "config-data") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.648769 4713 scope.go:117] "RemoveContainer" containerID="468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.653646 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.670444 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.671843 4713 scope.go:117] "RemoveContainer" containerID="c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.672380 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791\": container with ID starting with c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791 not found: ID does not exist" containerID="c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.672410 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791"} err="failed to get container status \"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791\": rpc error: code = NotFound desc = could not find container \"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791\": container with ID starting with c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791 not found: ID does not exist" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.672430 4713 scope.go:117] "RemoveContainer" containerID="468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.673296 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047\": container with ID starting with 468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047 not found: ID does not exist" containerID="468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.673319 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047"} err="failed to get container status \"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047\": rpc error: code = NotFound desc = could not find container \"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047\": container with ID starting with 468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047 not found: ID does not exist" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.673333 4713 scope.go:117] "RemoveContainer" containerID="c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.674845 4713 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.674862 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660b109c-ac1c-4a1b-8e62-16d557c43498-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.674871 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.674880 4713 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8xz\" (UniqueName: \"kubernetes.io/projected/660b109c-ac1c-4a1b-8e62-16d557c43498-kube-api-access-2s8xz\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.674889 4713 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.685602 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791"} err="failed to get container status \"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791\": rpc error: code = NotFound desc = could not find container \"c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791\": container with ID starting with c962fb1396a3a70020b756dffbef579b8a32af4ffa2ad8f58f87be4944196791 not found: ID does not exist" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.685652 4713 scope.go:117] "RemoveContainer" containerID="468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.686130 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047"} err="failed to get container status \"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047\": rpc error: code = NotFound desc = could not find container \"468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047\": container with ID starting with 468ed736d4fa3634ad79488264c25d3907caeecec83efa6fc10408f7d27b0047 not found: ID does not exist" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.687712 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "660b109c-ac1c-4a1b-8e62-16d557c43498" (UID: "660b109c-ac1c-4a1b-8e62-16d557c43498"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.776660 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b109c-ac1c-4a1b-8e62-16d557c43498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.938021 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.973344 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.990652 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.991129 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="init" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991152 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="init" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.991166 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733cb21-61c8-40e3-af0a-7375dcc21851" containerName="nova-manage" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991174 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733cb21-61c8-40e3-af0a-7375dcc21851" containerName="nova-manage" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.991195 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="dnsmasq-dns" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991204 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="dnsmasq-dns" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.991220 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-api" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991228 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-api" Jan 26 15:59:38 crc kubenswrapper[4713]: E0126 15:59:38.991245 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-log" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991253 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-log" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991537 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0176622-8842-4855-8962-ad88abbdb1e5" containerName="dnsmasq-dns" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991561 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="4733cb21-61c8-40e3-af0a-7375dcc21851" containerName="nova-manage" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991570 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-api" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 15:59:38.991587 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" containerName="nova-api-log" Jan 26 15:59:38 crc kubenswrapper[4713]: I0126 
15:59:38.996114 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.001133 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.001307 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.001440 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.002537 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.082662 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c69470-fd8d-4553-a1d3-4db65c424a2f-logs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.083001 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.083158 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-config-data\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.083213 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190024 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-config-data\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190159 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190356 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190491 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c69470-fd8d-4553-a1d3-4db65c424a2f-logs\") pod 
\"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190692 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwcg\" (UniqueName: \"kubernetes.io/projected/54c69470-fd8d-4553-a1d3-4db65c424a2f-kube-api-access-xmwcg\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.190934 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.198229 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c69470-fd8d-4553-a1d3-4db65c424a2f-logs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.199063 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-config-data\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.204544 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.209135 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.292112 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.292224 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwcg\" (UniqueName: \"kubernetes.io/projected/54c69470-fd8d-4553-a1d3-4db65c424a2f-kube-api-access-xmwcg\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.297681 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54c69470-fd8d-4553-a1d3-4db65c424a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.309724 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwcg\" (UniqueName: \"kubernetes.io/projected/54c69470-fd8d-4553-a1d3-4db65c424a2f-kube-api-access-xmwcg\") pod \"nova-api-0\" (UID: 
\"54c69470-fd8d-4553-a1d3-4db65c424a2f\") " pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.350992 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:59:39 crc kubenswrapper[4713]: E0126 15:59:39.368516 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:59:39 crc kubenswrapper[4713]: E0126 15:59:39.374333 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.384821 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.388677 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.393192 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdp95\" (UniqueName: \"kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.393241 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.393298 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: E0126 15:59:39.400534 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:59:39 crc kubenswrapper[4713]: E0126 15:59:39.400621 4713 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" containerName="nova-scheduler-scheduler" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.418729 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.495599 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdp95\" (UniqueName: \"kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.495655 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.495706 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.496377 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.498358 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.556098 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdp95\" (UniqueName: \"kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95\") pod \"redhat-marketplace-phhrc\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.637034 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" containerID="cri-o://73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa" gracePeriod=30 Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.638322 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" containerID="cri-o://48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c" gracePeriod=30 Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.837723 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660b109c-ac1c-4a1b-8e62-16d557c43498" path="/var/lib/kubelet/pods/660b109c-ac1c-4a1b-8e62-16d557c43498/volumes" Jan 26 15:59:39 crc kubenswrapper[4713]: I0126 15:59:39.849107 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.001876 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:59:40 crc kubenswrapper[4713]: W0126 15:59:40.315920 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f2894f0_cc63_40f8_870e_99e90830a491.slice/crio-3b41e2762176b5f9d58299cdf4b2f193b33c6644310312640cc6fbf25374bc23 WatchSource:0}: Error finding container 3b41e2762176b5f9d58299cdf4b2f193b33c6644310312640cc6fbf25374bc23: Status 404 returned error can't find the container with id 3b41e2762176b5f9d58299cdf4b2f193b33c6644310312640cc6fbf25374bc23 Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.317681 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.648304 4713 generic.go:334] "Generic (PLEG): container finished" podID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerID="73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa" exitCode=143 Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.648384 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerDied","Data":"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.650856 4713 generic.go:334] "Generic (PLEG): container finished" podID="3f2894f0-cc63-40f8-870e-99e90830a491" containerID="93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a" exitCode=0 Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.650901 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerDied","Data":"93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.650941 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerStarted","Data":"3b41e2762176b5f9d58299cdf4b2f193b33c6644310312640cc6fbf25374bc23"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.653152 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54c69470-fd8d-4553-a1d3-4db65c424a2f","Type":"ContainerStarted","Data":"001aa456cc2b976fa7f82bbd371a755eb1427449c15f924479be47b5a4fa9650"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.653216 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54c69470-fd8d-4553-a1d3-4db65c424a2f","Type":"ContainerStarted","Data":"97d7497fa774ab9b5b42d85ab1b3180cad770a400a5f461a0bc2aa92b91d6a0b"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.653233 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54c69470-fd8d-4553-a1d3-4db65c424a2f","Type":"ContainerStarted","Data":"b51cd45fd0dc3eca882cba336b921a2727356d3a4085dbea164d9249da2dc6a9"} Jan 26 15:59:40 crc kubenswrapper[4713]: I0126 15:59:40.701076 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701053971 podStartE2EDuration="2.701053971s" podCreationTimestamp="2026-01-26 15:59:38 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:40.699521628 +0000 UTC m=+1555.836538853" watchObservedRunningTime="2026-01-26 15:59:40.701053971 +0000 UTC m=+1555.838071206" Jan 26 15:59:41 crc kubenswrapper[4713]: I0126 15:59:41.667271 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerStarted","Data":"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6"} Jan 26 15:59:42 crc kubenswrapper[4713]: I0126 15:59:42.704676 4713 generic.go:334] "Generic (PLEG): container finished" podID="3f2894f0-cc63-40f8-870e-99e90830a491" containerID="2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6" exitCode=0 Jan 26 15:59:42 crc kubenswrapper[4713]: I0126 15:59:42.704997 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerDied","Data":"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6"} Jan 26 15:59:42 crc kubenswrapper[4713]: I0126 15:59:42.772471 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:52856->10.217.0.223:8775: read: connection reset by peer" Jan 26 15:59:42 crc kubenswrapper[4713]: I0126 15:59:42.772477 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:52866->10.217.0.223:8775: read: connection reset by peer" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.520233 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610083 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs\") pod \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610120 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle\") pod \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610166 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") pod \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610224 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzdvq\" (UniqueName: \"kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq\") pod \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610242 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs\") pod \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\" (UID: \"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.610763 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs" (OuterVolumeSpecName: "logs") pod "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" (UID: "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.611035 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.615288 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq" (OuterVolumeSpecName: "kube-api-access-vzdvq") pod "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" (UID: "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea"). InnerVolumeSpecName "kube-api-access-vzdvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.651224 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.651459 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data" (OuterVolumeSpecName: "config-data") pod "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" (UID: "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.687461 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" (UID: "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.694402 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" (UID: "3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.712567 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzdvq\" (UniqueName: \"kubernetes.io/projected/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-kube-api-access-vzdvq\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.712597 4713 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.712610 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.712621 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.725870 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerStarted","Data":"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327"} Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.736321 4713 generic.go:334] "Generic (PLEG): container finished" podID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" exitCode=0 Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.736509 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.736527 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f47dedd0-c816-416f-a64b-aa5fb5674ea7","Type":"ContainerDied","Data":"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7"} Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.737205 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f47dedd0-c816-416f-a64b-aa5fb5674ea7","Type":"ContainerDied","Data":"821d5b677e9a301c283ac821866f8e20c5be0464bb8865d6bce666e092de20ec"} Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.737273 4713 scope.go:117] "RemoveContainer" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.743694 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phhrc" podStartSLOduration=2.121866493 podStartE2EDuration="4.743672146s" podCreationTimestamp="2026-01-26 15:59:39 +0000 UTC" firstStartedPulling="2026-01-26 15:59:40.652866148 +0000 UTC m=+1555.789883393" lastFinishedPulling="2026-01-26 15:59:43.274671821 +0000 UTC m=+1558.411689046" observedRunningTime="2026-01-26 15:59:43.740494137 +0000 UTC m=+1558.877511372" watchObservedRunningTime="2026-01-26 15:59:43.743672146 +0000 UTC m=+1558.880689381" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.744861 4713 generic.go:334] "Generic (PLEG): container finished" podID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerID="48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c" exitCode=0 Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.744905 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerDied","Data":"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c"} Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.744933 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea","Type":"ContainerDied","Data":"b59f5b2bcbbf04e8c4b5636179823f857eec8358aa8ec3c55c052db13462df86"} Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.744992 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.780941 4713 scope.go:117] "RemoveContainer" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.782438 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7\": container with ID starting with 1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7 not found: ID does not exist" containerID="1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.782476 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7"} err="failed to get container status \"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7\": rpc error: code = NotFound desc = could not find container \"1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7\": container with ID starting with 1f1c62aa0d5484ddd5b216ec6022192f268436e7d48c6217f39c22d84d2e8cd7 not found: ID does not exist" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.782500 4713 scope.go:117] "RemoveContainer" containerID="48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.782618 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.801446 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.813640 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle\") pod \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.813780 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95fwh\" (UniqueName: \"kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh\") pod \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.814022 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data\") pod \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\" (UID: \"f47dedd0-c816-416f-a64b-aa5fb5674ea7\") " Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.817699 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh" (OuterVolumeSpecName: "kube-api-access-95fwh") pod "f47dedd0-c816-416f-a64b-aa5fb5674ea7" (UID: "f47dedd0-c816-416f-a64b-aa5fb5674ea7"). InnerVolumeSpecName "kube-api-access-95fwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.818969 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" path="/var/lib/kubelet/pods/3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea/volumes" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.822239 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.822659 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.822682 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.822725 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.822733 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.822758 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" containerName="nova-scheduler-scheduler" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.822766 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" containerName="nova-scheduler-scheduler" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.824942 4713 scope.go:117] "RemoveContainer" containerID="73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.829648 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-metadata" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.829696 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4fb14f-b1a8-4d60-8355-9f5b1c8bf4ea" containerName="nova-metadata-log" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.829715 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" containerName="nova-scheduler-scheduler" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.831648 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.834543 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.834687 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.843517 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.853770 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data" (OuterVolumeSpecName: "config-data") pod "f47dedd0-c816-416f-a64b-aa5fb5674ea7" (UID: "f47dedd0-c816-416f-a64b-aa5fb5674ea7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.855459 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f47dedd0-c816-416f-a64b-aa5fb5674ea7" (UID: "f47dedd0-c816-416f-a64b-aa5fb5674ea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.864511 4713 scope.go:117] "RemoveContainer" containerID="48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c" Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.864849 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c\": container with ID starting with 48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c not found: ID does not exist" containerID="48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.864876 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c"} err="failed to get container status \"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c\": rpc error: code = NotFound desc = could not find container \"48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c\": container with ID starting with 48b03233af495c6c4564c41874796a4971c58ca839fa35d85b31f9674beb943c not found: ID does not exist" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.864898 4713 scope.go:117] "RemoveContainer" containerID="73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa" Jan 26 15:59:43 crc kubenswrapper[4713]: E0126 15:59:43.865300 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa\": container with ID starting with 73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa not found: ID does not exist" containerID="73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.865317 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa"} err="failed to get container status \"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa\": rpc error: code = NotFound desc = could not find container \"73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa\": container with ID starting with 73baf18d8b9e42e90d3dbf928a48014d14b031477f3966aacd025d0af3fcaffa not found: ID does not exist" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.916502 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc kubenswrapper[4713]: I0126 15:59:43.916547 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95fwh\" (UniqueName: \"kubernetes.io/projected/f47dedd0-c816-416f-a64b-aa5fb5674ea7-kube-api-access-95fwh\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:43 crc 
kubenswrapper[4713]: I0126 15:59:43.916783 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47dedd0-c816-416f-a64b-aa5fb5674ea7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.018570 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-config-data\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.018636 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc59197a-2a96-4fe1-a320-f285fb456203-logs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.018738 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.018813 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.018848 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ftr\" (UniqueName: \"kubernetes.io/projected/fc59197a-2a96-4fe1-a320-f285fb456203-kube-api-access-h2ftr\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.068246 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.077926 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.093220 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.094999 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.098391 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.120823 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-config-data\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.120886 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc59197a-2a96-4fe1-a320-f285fb456203-logs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.121010 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.121093 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.121132 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ftr\" (UniqueName: \"kubernetes.io/projected/fc59197a-2a96-4fe1-a320-f285fb456203-kube-api-access-h2ftr\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.121209 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.123053 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc59197a-2a96-4fe1-a320-f285fb456203-logs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.125023 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.129231 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-config-data\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.129318 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc59197a-2a96-4fe1-a320-f285fb456203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.136628 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ftr\" (UniqueName: \"kubernetes.io/projected/fc59197a-2a96-4fe1-a320-f285fb456203-kube-api-access-h2ftr\") pod \"nova-metadata-0\" (UID: \"fc59197a-2a96-4fe1-a320-f285fb456203\") " pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.222920 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-config-data\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.223129 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlj2\" (UniqueName: \"kubernetes.io/projected/86f768fe-d211-44f1-9341-6d596fc18452-kube-api-access-grlj2\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.223182 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.228272 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.325570 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-config-data\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.325967 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grlj2\" (UniqueName: \"kubernetes.io/projected/86f768fe-d211-44f1-9341-6d596fc18452-kube-api-access-grlj2\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.326010 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.330349 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.333773 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f768fe-d211-44f1-9341-6d596fc18452-config-data\") pod \"nova-scheduler-0\" (UID: 
\"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.349588 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlj2\" (UniqueName: \"kubernetes.io/projected/86f768fe-d211-44f1-9341-6d596fc18452-kube-api-access-grlj2\") pod \"nova-scheduler-0\" (UID: \"86f768fe-d211-44f1-9341-6d596fc18452\") " pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.411160 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.704721 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: W0126 15:59:44.707309 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc59197a_2a96_4fe1_a320_f285fb456203.slice/crio-d59f87ee2b2ece624314a37668a982bd437721352aa0ed0e87f01b8110e7c062 WatchSource:0}: Error finding container d59f87ee2b2ece624314a37668a982bd437721352aa0ed0e87f01b8110e7c062: Status 404 returned error can't find the container with id d59f87ee2b2ece624314a37668a982bd437721352aa0ed0e87f01b8110e7c062 Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.780947 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc59197a-2a96-4fe1-a320-f285fb456203","Type":"ContainerStarted","Data":"d59f87ee2b2ece624314a37668a982bd437721352aa0ed0e87f01b8110e7c062"} Jan 26 15:59:44 crc kubenswrapper[4713]: I0126 15:59:44.893897 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:59:44 crc kubenswrapper[4713]: W0126 15:59:44.897582 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f768fe_d211_44f1_9341_6d596fc18452.slice/crio-d0274f8a01ae467807afa7d197f2e4be2857c057b7c1d6c7cf8543a0dd58b00a WatchSource:0}: Error finding container d0274f8a01ae467807afa7d197f2e4be2857c057b7c1d6c7cf8543a0dd58b00a: Status 404 returned error can't find the container with id d0274f8a01ae467807afa7d197f2e4be2857c057b7c1d6c7cf8543a0dd58b00a Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.007419 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47dedd0-c816-416f-a64b-aa5fb5674ea7" path="/var/lib/kubelet/pods/f47dedd0-c816-416f-a64b-aa5fb5674ea7/volumes" Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.009207 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc59197a-2a96-4fe1-a320-f285fb456203","Type":"ContainerStarted","Data":"e000af60036e7913367d6967cfd14450b1b46122a2a74d09f1aab1efef858cb8"} Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.009681 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc59197a-2a96-4fe1-a320-f285fb456203","Type":"ContainerStarted","Data":"1ccb8ff2fa3d2df1f99a695c9597812cabb2386465621a9f6eb8c55d0085a9e6"} Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.009767 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86f768fe-d211-44f1-9341-6d596fc18452","Type":"ContainerStarted","Data":"14c66a74bad88c6b5c468683dcd48e4033b46b99f8a719fce6ea82ae486e4bd1"} Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.009847 4713 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86f768fe-d211-44f1-9341-6d596fc18452","Type":"ContainerStarted","Data":"d0274f8a01ae467807afa7d197f2e4be2857c057b7c1d6c7cf8543a0dd58b00a"} Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.069407 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.069379111 podStartE2EDuration="3.069379111s" podCreationTimestamp="2026-01-26 15:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:46.051521549 +0000 UTC m=+1561.188538804" watchObservedRunningTime="2026-01-26 15:59:46.069379111 +0000 UTC m=+1561.206396356" Jan 26 15:59:46 crc kubenswrapper[4713]: I0126 15:59:46.096920 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.096889184 podStartE2EDuration="2.096889184s" podCreationTimestamp="2026-01-26 15:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:46.070499373 +0000 UTC m=+1561.207516608" watchObservedRunningTime="2026-01-26 15:59:46.096889184 +0000 UTC m=+1561.233906419" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.229507 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.230795 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.351942 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.352258 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.412153 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.850128 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.850445 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:49 crc kubenswrapper[4713]: I0126 15:59:49.930273 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:50 crc kubenswrapper[4713]: I0126 15:59:50.093536 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:50 crc kubenswrapper[4713]: I0126 15:59:50.171590 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:50 crc kubenswrapper[4713]: I0126 15:59:50.367603 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54c69470-fd8d-4553-a1d3-4db65c424a2f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:50 crc kubenswrapper[4713]: I0126 15:59:50.367608 4713 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="54c69470-fd8d-4553-a1d3-4db65c424a2f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.062945 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phhrc" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="registry-server" containerID="cri-o://29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327" gracePeriod=2 Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.711839 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.871593 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities\") pod \"3f2894f0-cc63-40f8-870e-99e90830a491\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.871680 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content\") pod \"3f2894f0-cc63-40f8-870e-99e90830a491\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.871756 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdp95\" (UniqueName: \"kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95\") pod \"3f2894f0-cc63-40f8-870e-99e90830a491\" (UID: \"3f2894f0-cc63-40f8-870e-99e90830a491\") " Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.873222 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities" (OuterVolumeSpecName: "utilities") pod "3f2894f0-cc63-40f8-870e-99e90830a491" (UID: "3f2894f0-cc63-40f8-870e-99e90830a491"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.880608 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95" (OuterVolumeSpecName: "kube-api-access-mdp95") pod "3f2894f0-cc63-40f8-870e-99e90830a491" (UID: "3f2894f0-cc63-40f8-870e-99e90830a491"). InnerVolumeSpecName "kube-api-access-mdp95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.892858 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f2894f0-cc63-40f8-870e-99e90830a491" (UID: "3f2894f0-cc63-40f8-870e-99e90830a491"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.975413 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.975446 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2894f0-cc63-40f8-870e-99e90830a491-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4713]: I0126 15:59:52.975458 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdp95\" (UniqueName: \"kubernetes.io/projected/3f2894f0-cc63-40f8-870e-99e90830a491-kube-api-access-mdp95\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.076280 4713 generic.go:334] "Generic (PLEG): container finished" podID="3f2894f0-cc63-40f8-870e-99e90830a491" containerID="29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327" exitCode=0 Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.076352 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhrc" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.076400 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerDied","Data":"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327"} Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.076820 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhrc" event={"ID":"3f2894f0-cc63-40f8-870e-99e90830a491","Type":"ContainerDied","Data":"3b41e2762176b5f9d58299cdf4b2f193b33c6644310312640cc6fbf25374bc23"} Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.076853 4713 scope.go:117] "RemoveContainer" containerID="29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.099153 4713 scope.go:117] "RemoveContainer" containerID="2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.122134 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.130935 4713 scope.go:117] "RemoveContainer" containerID="93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.135006 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhrc"] Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.171763 4713 scope.go:117] "RemoveContainer" containerID="29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327" Jan 26 15:59:53 crc kubenswrapper[4713]: E0126 15:59:53.172305 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327\": container with ID starting with 29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327 not found: ID does not exist" containerID="29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.172337 4713 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327"} err="failed to get container status \"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327\": rpc error: code = NotFound desc = could not find container \"29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327\": container with ID starting with 29a56ed1c65d19eb64ce3dc705f7e3122030b13187d60ce290a1a5e3447ee327 not found: ID does not exist" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.172427 4713 scope.go:117] "RemoveContainer" containerID="2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6" Jan 26 15:59:53 crc kubenswrapper[4713]: E0126 15:59:53.175956 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6\": container with ID starting with 2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6 not found: ID does not exist" containerID="2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.175992 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6"} err="failed to get container status \"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6\": rpc error: code = NotFound desc = could not find container \"2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6\": container with ID starting with 2000a2f9c460c1f0fdae96280c60adae58bb6fa962797b5595e453e6f05145e6 not found: ID does not exist" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.176013 4713 scope.go:117] "RemoveContainer" containerID="93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a" Jan 26 15:59:53 crc kubenswrapper[4713]: E0126 15:59:53.176442 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a\": container with ID starting with 93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a not found: ID does not exist" containerID="93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.176461 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a"} err="failed to get container status \"93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a\": rpc error: code = NotFound desc = could not find container \"93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a\": container with ID starting with 93de66ad91338ce84b77a4e314c338e6b1bcc85e43645ff5051911b06f0bb97a not found: ID does not exist" Jan 26 15:59:53 crc kubenswrapper[4713]: I0126 15:59:53.818039 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" path="/var/lib/kubelet/pods/3f2894f0-cc63-40f8-870e-99e90830a491/volumes" Jan 26 15:59:54 crc kubenswrapper[4713]: I0126 15:59:54.228553 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:59:54 crc kubenswrapper[4713]: I0126 15:59:54.228886 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:59:54 crc kubenswrapper[4713]: I0126 15:59:54.412599 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 15:59:54 crc kubenswrapper[4713]: I0126 15:59:54.444083 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:59:55 crc kubenswrapper[4713]: I0126 15:59:55.140009 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:59:55 crc kubenswrapper[4713]: I0126 15:59:55.251650 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fc59197a-2a96-4fe1-a320-f285fb456203" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:55 crc kubenswrapper[4713]: I0126 15:59:55.251673 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fc59197a-2a96-4fe1-a320-f285fb456203" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:59:58 crc kubenswrapper[4713]: I0126 15:59:58.850564 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.363195 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.363292 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.363734 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.363759 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.370452 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:59:59 crc kubenswrapper[4713]: I0126 15:59:59.374625 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.168509 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp"] Jan 26 16:00:00 crc kubenswrapper[4713]: E0126 16:00:00.169318 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.169333 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4713]: E0126 16:00:00.169373 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.169380 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4713]: E0126 16:00:00.169403 4713 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.169411 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.169622 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2894f0-cc63-40f8-870e-99e90830a491" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.170432 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.173229 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.173571 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.189537 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp"] Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.243715 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.243792 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn8vg\" (UniqueName: \"kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.243958 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.345656 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.345733 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn8vg\" (UniqueName: \"kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.345832 
4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.347188 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.352445 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.369131 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn8vg\" (UniqueName: \"kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg\") pod \"collect-profiles-29490720-lpnkp\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:00 crc kubenswrapper[4713]: I0126 16:00:00.497042 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:01 crc kubenswrapper[4713]: I0126 16:00:01.001662 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp"] Jan 26 16:00:01 crc kubenswrapper[4713]: W0126 16:00:01.007041 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814d989f_aaa7_4c73_8192_f7bc58d0be57.slice/crio-4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb WatchSource:0}: Error finding container 4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb: Status 404 returned error can't find the container with id 4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb Jan 26 16:00:01 crc kubenswrapper[4713]: I0126 16:00:01.187086 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" event={"ID":"814d989f-aaa7-4c73-8192-f7bc58d0be57","Type":"ContainerStarted","Data":"1e57b09d527e44ff691616d4a6188660d879cca9a6921b2071bc2f61354dcdef"} Jan 26 16:00:01 crc kubenswrapper[4713]: I0126 16:00:01.188593 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" event={"ID":"814d989f-aaa7-4c73-8192-f7bc58d0be57","Type":"ContainerStarted","Data":"4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb"} Jan 26 16:00:01 crc kubenswrapper[4713]: I0126 16:00:01.220686 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" podStartSLOduration=1.22065774 podStartE2EDuration="1.22065774s" podCreationTimestamp="2026-01-26 
16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:01.215961698 +0000 UTC m=+1576.352978933" watchObservedRunningTime="2026-01-26 16:00:01.22065774 +0000 UTC m=+1576.357674975" Jan 26 16:00:02 crc kubenswrapper[4713]: I0126 16:00:02.200287 4713 generic.go:334] "Generic (PLEG): container finished" podID="814d989f-aaa7-4c73-8192-f7bc58d0be57" containerID="1e57b09d527e44ff691616d4a6188660d879cca9a6921b2071bc2f61354dcdef" exitCode=0 Jan 26 16:00:02 crc kubenswrapper[4713]: I0126 16:00:02.200351 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" event={"ID":"814d989f-aaa7-4c73-8192-f7bc58d0be57","Type":"ContainerDied","Data":"1e57b09d527e44ff691616d4a6188660d879cca9a6921b2071bc2f61354dcdef"} Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.301423 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.301851 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.751846 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.821750 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn8vg\" (UniqueName: \"kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg\") pod \"814d989f-aaa7-4c73-8192-f7bc58d0be57\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.821867 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume\") pod \"814d989f-aaa7-4c73-8192-f7bc58d0be57\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.822246 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume\") pod \"814d989f-aaa7-4c73-8192-f7bc58d0be57\" (UID: \"814d989f-aaa7-4c73-8192-f7bc58d0be57\") " Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.831687 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "814d989f-aaa7-4c73-8192-f7bc58d0be57" (UID: "814d989f-aaa7-4c73-8192-f7bc58d0be57"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.837653 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume" (OuterVolumeSpecName: "config-volume") pod "814d989f-aaa7-4c73-8192-f7bc58d0be57" (UID: "814d989f-aaa7-4c73-8192-f7bc58d0be57"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.848574 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg" (OuterVolumeSpecName: "kube-api-access-xn8vg") pod "814d989f-aaa7-4c73-8192-f7bc58d0be57" (UID: "814d989f-aaa7-4c73-8192-f7bc58d0be57"). InnerVolumeSpecName "kube-api-access-xn8vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.926107 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814d989f-aaa7-4c73-8192-f7bc58d0be57-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.926138 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn8vg\" (UniqueName: \"kubernetes.io/projected/814d989f-aaa7-4c73-8192-f7bc58d0be57-kube-api-access-xn8vg\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:03 crc kubenswrapper[4713]: I0126 16:00:03.926151 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814d989f-aaa7-4c73-8192-f7bc58d0be57-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.222926 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" event={"ID":"814d989f-aaa7-4c73-8192-f7bc58d0be57","Type":"ContainerDied","Data":"4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb"} Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.222975 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4c18abaca6b538191a65d39a38981eb64a7b7fcbf6124e8476bde552facfeb" Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.223017 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp" Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.317102 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.375396 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:00:04 crc kubenswrapper[4713]: I0126 16:00:04.446695 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:00:05 crc kubenswrapper[4713]: I0126 16:00:05.249626 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.063556 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-zhp42"] Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.072898 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-zhp42"] Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.157578 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-v6rx9"] Jan 26 16:00:17 crc kubenswrapper[4713]: E0126 16:00:17.158193 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814d989f-aaa7-4c73-8192-f7bc58d0be57" containerName="collect-profiles" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.158221 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="814d989f-aaa7-4c73-8192-f7bc58d0be57" containerName="collect-profiles" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.158511 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="814d989f-aaa7-4c73-8192-f7bc58d0be57" containerName="collect-profiles" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.159465 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.160931 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.167445 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-v6rx9"] Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.230947 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.231313 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.231419 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.231524 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zg8\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.231575 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.333301 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.333432 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25zg8\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.333502 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.334364 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.334625 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.339661 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.339900 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.340168 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.342517 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.348836 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25zg8\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8\") pod \"cloudkitty-db-sync-v6rx9\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.480689 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.816176 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c67f072-d970-466d-a3c7-20df7968e5f2" path="/var/lib/kubelet/pods/5c67f072-d970-466d-a3c7-20df7968e5f2/volumes" Jan 26 16:00:17 crc kubenswrapper[4713]: I0126 16:00:17.917948 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-v6rx9"] Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.375248 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-v6rx9" event={"ID":"e835b410-79c4-401a-8406-77f0df484466","Type":"ContainerStarted","Data":"5d91b2fdc5fcce03d320b924b5151956e3474c15f794da536f2c9888e7faa3e1"} Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.871341 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.871675 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-central-agent" containerID="cri-o://4f858b7028291bca9ce5b7a04671c06065a19314121727d9f41f7a607eabb64e" gracePeriod=30 Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.872724 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="sg-core" containerID="cri-o://11ba4be224f3f2e29aa14cb6b78afc797c34661d36cd8ae77f7afbfc944d4540" gracePeriod=30 Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.872810 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="proxy-httpd" containerID="cri-o://557291bbab58014efc43a15767441aff9008ea46d063566282170d738630a28d" gracePeriod=30 Jan 26 16:00:18 crc kubenswrapper[4713]: I0126 16:00:18.872913 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-notification-agent" containerID="cri-o://e8b6b31a4853c4a29cdb14ed8d6c7e5b9d35ab2c776f8d8a844a013212a42457" gracePeriod=30 Jan 26 16:00:19 crc kubenswrapper[4713]: I0126 16:00:19.404341 4713 generic.go:334] "Generic (PLEG): container finished" podID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerID="11ba4be224f3f2e29aa14cb6b78afc797c34661d36cd8ae77f7afbfc944d4540" exitCode=2 Jan 26 16:00:19 crc kubenswrapper[4713]: I0126 16:00:19.404416 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerDied","Data":"11ba4be224f3f2e29aa14cb6b78afc797c34661d36cd8ae77f7afbfc944d4540"} Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.012189 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.119409 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.415280 4713 generic.go:334] "Generic (PLEG): container finished" podID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerID="557291bbab58014efc43a15767441aff9008ea46d063566282170d738630a28d" exitCode=0 Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.415307 4713 generic.go:334] "Generic (PLEG): 
container finished" podID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerID="4f858b7028291bca9ce5b7a04671c06065a19314121727d9f41f7a607eabb64e" exitCode=0 Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.415328 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerDied","Data":"557291bbab58014efc43a15767441aff9008ea46d063566282170d738630a28d"} Jan 26 16:00:20 crc kubenswrapper[4713]: I0126 16:00:20.415353 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerDied","Data":"4f858b7028291bca9ce5b7a04671c06065a19314121727d9f41f7a607eabb64e"} Jan 26 16:00:24 crc kubenswrapper[4713]: I0126 16:00:24.467927 4713 generic.go:334] "Generic (PLEG): container finished" podID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerID="e8b6b31a4853c4a29cdb14ed8d6c7e5b9d35ab2c776f8d8a844a013212a42457" exitCode=0 Jan 26 16:00:24 crc kubenswrapper[4713]: I0126 16:00:24.467966 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerDied","Data":"e8b6b31a4853c4a29cdb14ed8d6c7e5b9d35ab2c776f8d8a844a013212a42457"} Jan 26 16:00:24 crc kubenswrapper[4713]: I0126 16:00:24.978164 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="rabbitmq" containerID="cri-o://6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b" gracePeriod=604796 Jan 26 16:00:25 crc kubenswrapper[4713]: I0126 16:00:25.290660 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" containerID="cri-o://8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453" gracePeriod=604795 Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.238469 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346124 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346199 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346283 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346337 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvrvq\" (UniqueName: \"kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346467 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346587 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346789 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.346827 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts\") pod \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\" (UID: \"815a865f-eacd-4aa0-9c3f-f9bc23f62688\") " Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.348743 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.349905 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.352805 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq" (OuterVolumeSpecName: "kube-api-access-rvrvq") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "kube-api-access-rvrvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.353607 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts" (OuterVolumeSpecName: "scripts") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.388730 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.413754 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.447407 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450063 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450104 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450120 4713 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450132 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvrvq\" (UniqueName: \"kubernetes.io/projected/815a865f-eacd-4aa0-9c3f-f9bc23f62688-kube-api-access-rvrvq\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450144 4713 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450155 4713 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.450167 4713 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/815a865f-eacd-4aa0-9c3f-f9bc23f62688-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.469101 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data" (OuterVolumeSpecName: "config-data") pod "815a865f-eacd-4aa0-9c3f-f9bc23f62688" (UID: "815a865f-eacd-4aa0-9c3f-f9bc23f62688"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.501694 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"815a865f-eacd-4aa0-9c3f-f9bc23f62688","Type":"ContainerDied","Data":"481cc8641857f66acf4738e446d49adfb8dc7eb47b24955ebba870563ef07e4f"} Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.501748 4713 scope.go:117] "RemoveContainer" containerID="557291bbab58014efc43a15767441aff9008ea46d063566282170d738630a28d" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.501925 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.505495 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-v6rx9" event={"ID":"e835b410-79c4-401a-8406-77f0df484466","Type":"ContainerStarted","Data":"4a7f7a654d58d52e24936131017b47df97177718b63e7ba206480d8c11cfc8ed"} Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.531602 4713 scope.go:117] "RemoveContainer" containerID="11ba4be224f3f2e29aa14cb6b78afc797c34661d36cd8ae77f7afbfc944d4540" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.533211 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-v6rx9" podStartSLOduration=1.542606599 podStartE2EDuration="10.533198947s" podCreationTimestamp="2026-01-26 16:00:17 +0000 UTC" firstStartedPulling="2026-01-26 16:00:17.922977252 +0000 UTC m=+1593.059994487" lastFinishedPulling="2026-01-26 16:00:26.91356961 +0000 UTC m=+1602.050586835" observedRunningTime="2026-01-26 16:00:27.522124566 +0000 UTC m=+1602.659141821" watchObservedRunningTime="2026-01-26 16:00:27.533198947 +0000 UTC m=+1602.670216182" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.551742 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815a865f-eacd-4aa0-9c3f-f9bc23f62688-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.557961 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.559525 4713 scope.go:117] "RemoveContainer" containerID="e8b6b31a4853c4a29cdb14ed8d6c7e5b9d35ab2c776f8d8a844a013212a42457" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.570829 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.586429 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:27 crc kubenswrapper[4713]: E0126 16:00:27.587247 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-notification-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587272 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-notification-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: E0126 16:00:27.587308 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="sg-core" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587318 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="sg-core" Jan 26 16:00:27 crc kubenswrapper[4713]: E0126 16:00:27.587336 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="proxy-httpd" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587344 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="proxy-httpd" Jan 26 16:00:27 crc kubenswrapper[4713]: E0126 16:00:27.587359 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-central-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587383 4713 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-central-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587627 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-notification-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587663 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="sg-core" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587686 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="ceilometer-central-agent" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.587696 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" containerName="proxy-httpd" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.590120 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.592905 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.593121 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.593955 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.594610 4713 scope.go:117] "RemoveContainer" containerID="4f858b7028291bca9ce5b7a04671c06065a19314121727d9f41f7a607eabb64e" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.599383 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653275 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-scripts\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653330 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653404 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-log-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653420 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653447 4713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvpd\" (UniqueName: \"kubernetes.io/projected/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-kube-api-access-sqvpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653507 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-config-data\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653556 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.653574 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-run-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.754928 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.754970 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-run-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755051 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-scripts\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755075 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755125 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-log-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755141 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc 
kubenswrapper[4713]: I0126 16:00:27.755166 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvpd\" (UniqueName: \"kubernetes.io/projected/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-kube-api-access-sqvpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755218 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-config-data\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.755672 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-run-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.756043 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-log-httpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.759404 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.759409 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.759829 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-config-data\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.760391 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.760397 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-scripts\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.773459 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvpd\" (UniqueName: \"kubernetes.io/projected/a194e4ba-2c4a-4d27-ad03-d8208f85cf13-kube-api-access-sqvpd\") pod \"ceilometer-0\" (UID: \"a194e4ba-2c4a-4d27-ad03-d8208f85cf13\") " pod="openstack/ceilometer-0" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.814856 4713 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815a865f-eacd-4aa0-9c3f-f9bc23f62688" path="/var/lib/kubelet/pods/815a865f-eacd-4aa0-9c3f-f9bc23f62688/volumes" Jan 26 16:00:27 crc kubenswrapper[4713]: I0126 16:00:27.907699 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:00:28 crc kubenswrapper[4713]: I0126 16:00:28.439802 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:00:28 crc kubenswrapper[4713]: I0126 16:00:28.518961 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a194e4ba-2c4a-4d27-ad03-d8208f85cf13","Type":"ContainerStarted","Data":"8d80bc7445dc723340b75429f54a5b9070f04905923d6dc02c443ef119633edc"} Jan 26 16:00:29 crc kubenswrapper[4713]: I0126 16:00:29.538077 4713 generic.go:334] "Generic (PLEG): container finished" podID="e835b410-79c4-401a-8406-77f0df484466" containerID="4a7f7a654d58d52e24936131017b47df97177718b63e7ba206480d8c11cfc8ed" exitCode=0 Jan 26 16:00:29 crc kubenswrapper[4713]: I0126 16:00:29.538161 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-v6rx9" event={"ID":"e835b410-79c4-401a-8406-77f0df484466","Type":"ContainerDied","Data":"4a7f7a654d58d52e24936131017b47df97177718b63e7ba206480d8c11cfc8ed"} Jan 26 16:00:29 crc kubenswrapper[4713]: I0126 16:00:29.952650 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 16:00:30 crc kubenswrapper[4713]: I0126 16:00:30.263710 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.256345 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.260478 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data\") pod \"e835b410-79c4-401a-8406-77f0df484466\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.260571 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle\") pod \"e835b410-79c4-401a-8406-77f0df484466\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.260626 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts\") pod \"e835b410-79c4-401a-8406-77f0df484466\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.260743 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs\") pod \"e835b410-79c4-401a-8406-77f0df484466\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.260799 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25zg8\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8\") pod \"e835b410-79c4-401a-8406-77f0df484466\" (UID: \"e835b410-79c4-401a-8406-77f0df484466\") " Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.264971 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts" (OuterVolumeSpecName: "scripts") pod "e835b410-79c4-401a-8406-77f0df484466" (UID: "e835b410-79c4-401a-8406-77f0df484466"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.269170 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8" (OuterVolumeSpecName: "kube-api-access-25zg8") pod "e835b410-79c4-401a-8406-77f0df484466" (UID: "e835b410-79c4-401a-8406-77f0df484466"). InnerVolumeSpecName "kube-api-access-25zg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.282599 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs" (OuterVolumeSpecName: "certs") pod "e835b410-79c4-401a-8406-77f0df484466" (UID: "e835b410-79c4-401a-8406-77f0df484466"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.304434 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data" (OuterVolumeSpecName: "config-data") pod "e835b410-79c4-401a-8406-77f0df484466" (UID: "e835b410-79c4-401a-8406-77f0df484466"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.346584 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e835b410-79c4-401a-8406-77f0df484466" (UID: "e835b410-79c4-401a-8406-77f0df484466"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.362932 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.362977 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25zg8\" (UniqueName: \"kubernetes.io/projected/e835b410-79c4-401a-8406-77f0df484466-kube-api-access-25zg8\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.362999 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.363016 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.363032 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e835b410-79c4-401a-8406-77f0df484466-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.576287 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-v6rx9" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.576301 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-v6rx9" event={"ID":"e835b410-79c4-401a-8406-77f0df484466","Type":"ContainerDied","Data":"5d91b2fdc5fcce03d320b924b5151956e3474c15f794da536f2c9888e7faa3e1"} Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.576723 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d91b2fdc5fcce03d320b924b5151956e3474c15f794da536f2c9888e7faa3e1" Jan 26 16:00:32 crc kubenswrapper[4713]: I0126 16:00:32.582553 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a194e4ba-2c4a-4d27-ad03-d8208f85cf13","Type":"ContainerStarted","Data":"8488da40524022c6fc8361170e047659754d90275d9c6eb18f39fd779fc4bcf0"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.037784 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.132694 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179214 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179382 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179420 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179462 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179483 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179540 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179578 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179615 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179655 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.179724 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnzl\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl\") pod 
\"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.180420 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"7a575e00-cd12-498f-b8a4-0806737389d9\" (UID: \"7a575e00-cd12-498f-b8a4-0806737389d9\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.181651 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.185085 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.189255 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.194750 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl" (OuterVolumeSpecName: "kube-api-access-zqnzl") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "kube-api-access-zqnzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.195333 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.202710 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.214411 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info" (OuterVolumeSpecName: "pod-info") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.234917 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data" (OuterVolumeSpecName: "config-data") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.272163 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97" (OuterVolumeSpecName: "persistence") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "pvc-b12b91c9-58db-444c-88cc-aa786fca9e97". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.283225 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r6jt\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.283313 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.283348 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.283387 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284441 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284601 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284648 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284672 4713 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284697 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284716 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.284829 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data\") pod \"100b22db-ec0d-40f0-975e-c86349b1890a\" (UID: \"100b22db-ec0d-40f0-975e-c86349b1890a\") " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285335 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285352 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285378 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnzl\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-kube-api-access-zqnzl\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285402 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") on node \"crc\" " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285413 4713 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a575e00-cd12-498f-b8a4-0806737389d9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285425 4713 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285436 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285447 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 
16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.285455 4713 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a575e00-cd12-498f-b8a4-0806737389d9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.288737 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.296102 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.298385 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.300417 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf" (OuterVolumeSpecName: "server-conf") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.305852 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.305923 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.305984 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.308070 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.317655 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" gracePeriod=600 Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.327590 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.327651 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt" (OuterVolumeSpecName: "kube-api-access-8r6jt") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "kube-api-access-8r6jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.332435 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.364239 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info" (OuterVolumeSpecName: "pod-info") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.367317 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.367668 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b12b91c9-58db-444c-88cc-aa786fca9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97") on node "crc" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.371987 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52" (OuterVolumeSpecName: "persistence") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387249 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r6jt\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-kube-api-access-8r6jt\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387286 4713 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/100b22db-ec0d-40f0-975e-c86349b1890a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387296 4713 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/100b22db-ec0d-40f0-975e-c86349b1890a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387305 4713 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a575e00-cd12-498f-b8a4-0806737389d9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387333 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") on node \"crc\" " Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387344 4713 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387353 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387373 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387385 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 
16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.387397 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.416254 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data" (OuterVolumeSpecName: "config-data") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.463102 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf" (OuterVolumeSpecName: "server-conf") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.481457 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-lm4cs"] Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.490537 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-lm4cs"] Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.493057 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.493159 4713 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
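[Annotation] The entries above trace kubelet's standard liveness-failure path for machine-config-daemon: patch_prober records the failed HTTP probe (connection refused on 127.0.0.1:8798), the sync loop marks the container unhealthy and kills it under the pod's grace period, and pod_workers then blocks the restart with CrashLoopBackOff until the current back-off delay expires. As a minimal sketch (plain Go, not kubelet source), the doubling-with-cap schedule below is consistent with the "back-off 5m0s" quoted in the error; the 10s initial delay is an assumption taken from kubelet's documented defaults.

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative sketch only: the doubling-with-cap restart backoff that
    // yields messages like "back-off 5m0s restarting failed container".
    // initialDelay is an assumed default; maxDelay matches the 5m0s cap
    // visible in the log. This is not kubelet's actual implementation.
    func main() {
        const (
            initialDelay = 10 * time.Second // assumption: documented kubelet default
            maxDelay     = 5 * time.Minute  // the "5m0s" cap quoted above
        )
        delay := initialDelay
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v before retrying the container\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // further restarts keep the capped delay
            }
        }
    }

Running the sketch prints delays of 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s repeatedly, which is why the same CrashLoopBackOff message recurs for a container that keeps failing its probe.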
Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.493306 4713 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52") on node "crc" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.496332 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.496382 4713 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/100b22db-ec0d-40f0-975e-c86349b1890a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.496393 4713 reconciler_common.go:293] "Volume detached for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.542027 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7a575e00-cd12-498f-b8a4-0806737389d9" (UID: "7a575e00-cd12-498f-b8a4-0806737389d9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.574459 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-4k84d"] Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.574841 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="setup-container" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.574857 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="setup-container" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.574884 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.574890 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.574902 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="setup-container" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.574910 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="setup-container" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.574931 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e835b410-79c4-401a-8406-77f0df484466" containerName="cloudkitty-db-sync" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.574937 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e835b410-79c4-401a-8406-77f0df484466" containerName="cloudkitty-db-sync" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.574948 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: 
I0126 16:00:33.574954 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.575122 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.575139 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e835b410-79c4-401a-8406-77f0df484466" containerName="cloudkitty-db-sync" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.575154 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" containerName="rabbitmq" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.575809 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.584144 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.599049 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a575e00-cd12-498f-b8a4-0806737389d9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.603706 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-4k84d"] Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.604004 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "100b22db-ec0d-40f0-975e-c86349b1890a" (UID: "100b22db-ec0d-40f0-975e-c86349b1890a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.607335 4713 generic.go:334] "Generic (PLEG): container finished" podID="7a575e00-cd12-498f-b8a4-0806737389d9" containerID="6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b" exitCode=0 Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.607519 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerDied","Data":"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.607678 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a575e00-cd12-498f-b8a4-0806737389d9","Type":"ContainerDied","Data":"5f06368974061b373afa009965d61709f6d87a65f871e63d054e25a60e82240d"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.607869 4713 scope.go:117] "RemoveContainer" containerID="6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.608119 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.617763 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" exitCode=0 Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.617823 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.618485 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.623928 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.628534 4713 generic.go:334] "Generic (PLEG): container finished" podID="100b22db-ec0d-40f0-975e-c86349b1890a" containerID="8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453" exitCode=0 Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.628567 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerDied","Data":"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.628589 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"100b22db-ec0d-40f0-975e-c86349b1890a","Type":"ContainerDied","Data":"382b983ae6a117f226b3d61bab487bc29048380342cba284172f6aa1fbcd11a0"} Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.628650 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.651909 4713 scope.go:117] "RemoveContainer" containerID="644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.680829 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.710241 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.710604 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.710835 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.710961 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dzw\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.711084 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.711270 4713 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/100b22db-ec0d-40f0-975e-c86349b1890a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.755835 4713 scope.go:117] "RemoveContainer" containerID="6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.757194 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b\": container with ID starting with 6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b not found: ID does not exist" containerID="6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.757226 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b"} 
err="failed to get container status \"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b\": rpc error: code = NotFound desc = could not find container \"6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b\": container with ID starting with 6d1371238948ee8ab3018d23537d9f97ab71958842bf22349d195efb6361857b not found: ID does not exist" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.757247 4713 scope.go:117] "RemoveContainer" containerID="644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f" Jan 26 16:00:33 crc kubenswrapper[4713]: E0126 16:00:33.764892 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f\": container with ID starting with 644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f not found: ID does not exist" containerID="644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.764925 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f"} err="failed to get container status \"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f\": rpc error: code = NotFound desc = could not find container \"644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f\": container with ID starting with 644dfaca6ea3ca3209dd65a9c882b713b3434866352e57957bae0b279e83000f not found: ID does not exist" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.764953 4713 scope.go:117] "RemoveContainer" containerID="42ffb45851c67f85ba43b543b337fa54564e1c75cb03fd91b387c5b7e98ba8b2" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.815651 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7dzw\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.818856 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.829480 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.831286 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.840540 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.829097 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.852068 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.868842 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.876203 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.878697 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7dzw\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw\") pod \"cloudkitty-storageinit-4k84d\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.909796 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.988373 4713 scope.go:117] "RemoveContainer" containerID="8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.990855 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889cf7db-25b0-4afa-8daa-351dbd2dffe8" path="/var/lib/kubelet/pods/889cf7db-25b0-4afa-8daa-351dbd2dffe8/volumes" Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.991418 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:33 crc kubenswrapper[4713]: I0126 16:00:33.991444 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.005956 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.007780 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.012828 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013043 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c7g8f" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013257 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013379 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013486 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013577 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.013673 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.017619 4713 scope.go:117] "RemoveContainer" containerID="d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.032084 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.049777 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.056435 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.058686 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.060981 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.062687 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.062744 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-65tnt" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.062828 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.062921 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.062988 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.063076 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.068983 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.095744 4713 scope.go:117] "RemoveContainer" containerID="8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453" Jan 26 16:00:34 crc kubenswrapper[4713]: E0126 16:00:34.098904 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453\": container with ID starting with 8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453 not found: ID does not exist" containerID="8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.099014 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453"} err="failed to get container status \"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453\": rpc error: code = NotFound desc = could not find container \"8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453\": container with ID starting with 8861de966625da7bdf9264299d3fe0658ef71f18499ba0c15b3610896a203453 not found: ID does not exist" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.099093 4713 scope.go:117] "RemoveContainer" containerID="d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512" Jan 26 16:00:34 crc kubenswrapper[4713]: E0126 16:00:34.099550 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512\": container with ID starting with d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512 not found: ID does not exist" containerID="d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.099587 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512"} err="failed to get container status 
\"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512\": rpc error: code = NotFound desc = could not find container \"d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512\": container with ID starting with d9bc4cf0deeff3133fa6a3db72d690d889d4e333a291b97dc393485761a1f512 not found: ID does not exist" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182609 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182653 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182673 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182698 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85g9b\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-kube-api-access-85g9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182733 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182749 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182767 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182825 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182863 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182881 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182901 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.182969 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43b98a31-5771-411a-b08d-1c3f17c50a4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183006 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183023 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183043 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx8h\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-kube-api-access-xbx8h\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183069 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183086 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183105 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43b98a31-5771-411a-b08d-1c3f17c50a4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183126 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183206 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183226 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.183241 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285141 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285180 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285204 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285253 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285303 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285325 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285345 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285394 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43b98a31-5771-411a-b08d-1c3f17c50a4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285431 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285452 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx8h\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-kube-api-access-xbx8h\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285511 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285539 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285562 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43b98a31-5771-411a-b08d-1c3f17c50a4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285591 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285624 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285649 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285677 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285735 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285760 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285783 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.285806 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85g9b\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-kube-api-access-85g9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.286594 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.288060 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.288273 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.288520 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.290700 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.290751 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.291440 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.291619 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.292474 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.293170 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.294129 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.297885 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.298528 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43b98a31-5771-411a-b08d-1c3f17c50a4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.299809 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.300134 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.301610 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43b98a31-5771-411a-b08d-1c3f17c50a4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.304653 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43b98a31-5771-411a-b08d-1c3f17c50a4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.305735 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.308237 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.308328 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12a97f487943d57b18987a444e059a363b72befbbd881b9c31da3513a8331d3d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.308765 4713 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.308824 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a0a23a7e437b41ba232f4b8f97a57cdc4bd553de75aff653652d00d1601e57d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.309214 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85g9b\" (UniqueName: \"kubernetes.io/projected/43b98a31-5771-411a-b08d-1c3f17c50a4d-kube-api-access-85g9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.326400 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx8h\" (UniqueName: \"kubernetes.io/projected/36f2aa2e-c567-4d86-b3d6-c3572a45ccd1-kube-api-access-xbx8h\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.382989 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b12b91c9-58db-444c-88cc-aa786fca9e97\") pod \"rabbitmq-cell1-server-0\" (UID: \"43b98a31-5771-411a-b08d-1c3f17c50a4d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.405678 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db5e5f0d-0776-4457-9aee-c3bc6cf9ec52\") pod \"rabbitmq-server-0\" (UID: \"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1\") " pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.503816 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-4k84d"] Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.651755 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.660496 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a194e4ba-2c4a-4d27-ad03-d8208f85cf13","Type":"ContainerStarted","Data":"8ba992e9aa433ce9d6bcea94a07fc33d9d8079b5dc152a5536c79f5a4f62c231"} Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.660541 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a194e4ba-2c4a-4d27-ad03-d8208f85cf13","Type":"ContainerStarted","Data":"a2809a6959eaee95426479862a8e2fd15b0ac9319d3ff3fdd36080edff4b1623"} Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.690039 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4k84d" event={"ID":"854b9c7b-7ba2-4909-8a82-3f927c3b28c0","Type":"ContainerStarted","Data":"02dddb0ffcb8cbb64cd7cf6aff2e894405cae35d5c9ef5aa50f3075e9ebe9e9a"} Jan 26 16:00:34 crc kubenswrapper[4713]: I0126 16:00:34.695567 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:00:34 crc kubenswrapper[4713]: E0126 16:00:34.838244 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod854b9c7b_7ba2_4909_8a82_3f927c3b28c0.slice/crio-ca8603d341a3dead0aaaa6f87ad81604357fe5b89f4eff5d20a92b2239894b1f.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.034700 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.037699 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.049083 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.062720 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.204949 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205012 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205051 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205510 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6fw\" (UniqueName: \"kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205697 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.205858 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.233764 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.258638 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.327347 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.333000 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.333319 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz6fw\" (UniqueName: \"kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.339239 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.339464 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.339692 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.339822 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.340495 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.334447 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 
16:00:35.334973 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.342114 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.388037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz6fw\" (UniqueName: \"kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.393587 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.393905 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-5c65w\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.662610 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.716901 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43b98a31-5771-411a-b08d-1c3f17c50a4d","Type":"ContainerStarted","Data":"ae83a04761e985ad588e251f9457b712cef21ea686e42a0ca0c40907cf4f861d"} Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.719373 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4k84d" event={"ID":"854b9c7b-7ba2-4909-8a82-3f927c3b28c0","Type":"ContainerStarted","Data":"ca8603d341a3dead0aaaa6f87ad81604357fe5b89f4eff5d20a92b2239894b1f"} Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.721775 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1","Type":"ContainerStarted","Data":"236534b6cfe92795bcd8212964d84142bbdcf2b467735cc038436544218fb72f"} Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.757637 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-4k84d" podStartSLOduration=2.757617642 podStartE2EDuration="2.757617642s" podCreationTimestamp="2026-01-26 16:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:35.741046707 +0000 UTC m=+1610.878063942" watchObservedRunningTime="2026-01-26 16:00:35.757617642 +0000 UTC m=+1610.894634878" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.851977 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100b22db-ec0d-40f0-975e-c86349b1890a" path="/var/lib/kubelet/pods/100b22db-ec0d-40f0-975e-c86349b1890a/volumes" Jan 26 16:00:35 crc kubenswrapper[4713]: I0126 16:00:35.852726 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a575e00-cd12-498f-b8a4-0806737389d9" path="/var/lib/kubelet/pods/7a575e00-cd12-498f-b8a4-0806737389d9/volumes" Jan 26 16:00:36 crc kubenswrapper[4713]: I0126 16:00:36.216685 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:36 crc kubenswrapper[4713]: W0126 16:00:36.255579 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod582e597a_f9be_429c_8a24_0a0dc19a9274.slice/crio-1ba87a737e900bb0e4be19761910c7a800dd569250882d42562c65a0d57b68d3 WatchSource:0}: Error finding container 1ba87a737e900bb0e4be19761910c7a800dd569250882d42562c65a0d57b68d3: Status 404 returned error can't find the container with id 1ba87a737e900bb0e4be19761910c7a800dd569250882d42562c65a0d57b68d3 Jan 26 16:00:36 crc kubenswrapper[4713]: I0126 16:00:36.732564 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" event={"ID":"582e597a-f9be-429c-8a24-0a0dc19a9274","Type":"ContainerStarted","Data":"1ba87a737e900bb0e4be19761910c7a800dd569250882d42562c65a0d57b68d3"} Jan 26 16:00:36 crc kubenswrapper[4713]: I0126 16:00:36.736539 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a194e4ba-2c4a-4d27-ad03-d8208f85cf13","Type":"ContainerStarted","Data":"89f42a4fafba75e329cf22d8409559100e3d3f4f912ff583d7ad4cd25e48048d"} Jan 26 16:00:36 crc kubenswrapper[4713]: I0126 16:00:36.758062 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.22275151 podStartE2EDuration="9.758039106s" podCreationTimestamp="2026-01-26 16:00:27 +0000 UTC" firstStartedPulling="2026-01-26 16:00:28.448547382 +0000 UTC m=+1603.585564607" lastFinishedPulling="2026-01-26 16:00:35.983834968 +0000 UTC m=+1611.120852203" observedRunningTime="2026-01-26 16:00:36.754475956 +0000 UTC m=+1611.891493191" watchObservedRunningTime="2026-01-26 16:00:36.758039106 +0000 UTC m=+1611.895056341" Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.748772 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1","Type":"ContainerStarted","Data":"e132586e3ce252e75dac97babe5d505eb17e32b9d424d709d351ff119a8a4618"} Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.751137 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43b98a31-5771-411a-b08d-1c3f17c50a4d","Type":"ContainerStarted","Data":"d439fb1ca311e64b11d204702520e845843c634c860617feaa8b23061aef8323"} Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.753284 4713 generic.go:334] "Generic (PLEG): container finished" podID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerID="633b2a7e8e84ac5d1f7b63f963993fe076abd6a9a57482e5a8c4e94c82320c13" exitCode=0 Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.754489 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" event={"ID":"582e597a-f9be-429c-8a24-0a0dc19a9274","Type":"ContainerDied","Data":"633b2a7e8e84ac5d1f7b63f963993fe076abd6a9a57482e5a8c4e94c82320c13"} Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.755710 4713 generic.go:334] "Generic (PLEG): container finished" podID="854b9c7b-7ba2-4909-8a82-3f927c3b28c0" containerID="ca8603d341a3dead0aaaa6f87ad81604357fe5b89f4eff5d20a92b2239894b1f" exitCode=0 Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.755776 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4k84d" event={"ID":"854b9c7b-7ba2-4909-8a82-3f927c3b28c0","Type":"ContainerDied","Data":"ca8603d341a3dead0aaaa6f87ad81604357fe5b89f4eff5d20a92b2239894b1f"} Jan 26 16:00:37 crc kubenswrapper[4713]: I0126 16:00:37.756030 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:00:38 crc kubenswrapper[4713]: I0126 16:00:38.767422 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" event={"ID":"582e597a-f9be-429c-8a24-0a0dc19a9274","Type":"ContainerStarted","Data":"ac8c1346645601a68477e248ef0670e588886da87e96668d56a644955ec93acc"} Jan 26 16:00:38 crc kubenswrapper[4713]: I0126 16:00:38.811341 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" podStartSLOduration=3.811321429 podStartE2EDuration="3.811321429s" podCreationTimestamp="2026-01-26 16:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:38.800460654 +0000 UTC m=+1613.937477889" watchObservedRunningTime="2026-01-26 16:00:38.811321429 +0000 UTC m=+1613.948338664" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.313859 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.502345 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle\") pod \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.502437 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts\") pod \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.502645 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7dzw\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw\") pod \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.502678 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs\") pod \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.502778 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data\") pod \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\" (UID: \"854b9c7b-7ba2-4909-8a82-3f927c3b28c0\") " Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.508655 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw" (OuterVolumeSpecName: "kube-api-access-t7dzw") pod "854b9c7b-7ba2-4909-8a82-3f927c3b28c0" (UID: "854b9c7b-7ba2-4909-8a82-3f927c3b28c0"). InnerVolumeSpecName "kube-api-access-t7dzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.508715 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs" (OuterVolumeSpecName: "certs") pod "854b9c7b-7ba2-4909-8a82-3f927c3b28c0" (UID: "854b9c7b-7ba2-4909-8a82-3f927c3b28c0"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.509544 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts" (OuterVolumeSpecName: "scripts") pod "854b9c7b-7ba2-4909-8a82-3f927c3b28c0" (UID: "854b9c7b-7ba2-4909-8a82-3f927c3b28c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.546500 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "854b9c7b-7ba2-4909-8a82-3f927c3b28c0" (UID: "854b9c7b-7ba2-4909-8a82-3f927c3b28c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.589982 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data" (OuterVolumeSpecName: "config-data") pod "854b9c7b-7ba2-4909-8a82-3f927c3b28c0" (UID: "854b9c7b-7ba2-4909-8a82-3f927c3b28c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.605096 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.605141 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.605155 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.605166 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7dzw\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-kube-api-access-t7dzw\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.605176 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/854b9c7b-7ba2-4909-8a82-3f927c3b28c0-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.779386 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4k84d" event={"ID":"854b9c7b-7ba2-4909-8a82-3f927c3b28c0","Type":"ContainerDied","Data":"02dddb0ffcb8cbb64cd7cf6aff2e894405cae35d5c9ef5aa50f3075e9ebe9e9a"} Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.779441 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02dddb0ffcb8cbb64cd7cf6aff2e894405cae35d5c9ef5aa50f3075e9ebe9e9a" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.779448 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-4k84d" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.779617 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.952986 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.953555 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" containerName="cloudkitty-proc" containerID="cri-o://dbc1f5d6023a0912a139e284ac46f7f930ca7fbe2a257dc097ee9198291a9e19" gracePeriod=30 Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.966321 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.970742 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api-log" containerID="cri-o://7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1" gracePeriod=30 Jan 26 16:00:39 crc kubenswrapper[4713]: I0126 16:00:39.971190 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api" containerID="cri-o://c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb" gracePeriod=30 Jan 26 16:00:40 crc kubenswrapper[4713]: I0126 16:00:40.795580 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerID="7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1" exitCode=143 Jan 26 16:00:40 crc kubenswrapper[4713]: I0126 16:00:40.795971 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerDied","Data":"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1"} Jan 26 16:00:40 crc kubenswrapper[4713]: I0126 16:00:40.811308 4713 generic.go:334] "Generic (PLEG): container finished" podID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" containerID="dbc1f5d6023a0912a139e284ac46f7f930ca7fbe2a257dc097ee9198291a9e19" exitCode=0 Jan 26 16:00:40 crc kubenswrapper[4713]: I0126 16:00:40.812494 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"e64b34b6-9839-4ef8-83fb-7bb963c865aa","Type":"ContainerDied","Data":"dbc1f5d6023a0912a139e284ac46f7f930ca7fbe2a257dc097ee9198291a9e19"} Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.107229 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244082 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244197 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhs6\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244222 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244304 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244386 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.244420 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data\") pod \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\" (UID: \"e64b34b6-9839-4ef8-83fb-7bb963c865aa\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.254946 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs" (OuterVolumeSpecName: "certs") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.261192 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts" (OuterVolumeSpecName: "scripts") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.261333 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6" (OuterVolumeSpecName: "kube-api-access-bhhs6") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "kube-api-access-bhhs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.275661 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.296568 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.311081 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data" (OuterVolumeSpecName: "config-data") pod "e64b34b6-9839-4ef8-83fb-7bb963c865aa" (UID: "e64b34b6-9839-4ef8-83fb-7bb963c865aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350803 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350837 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350847 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350855 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhs6\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-kube-api-access-bhhs6\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350864 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e64b34b6-9839-4ef8-83fb-7bb963c865aa-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.350873 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64b34b6-9839-4ef8-83fb-7bb963c865aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.421125 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556499 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556759 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556777 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556814 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpgcf\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556836 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556902 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.556979 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.557048 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.557114 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs\") pod \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\" (UID: \"e2d47268-3c4f-48cf-a362-b81aa7265dd4\") " Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.564607 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf" (OuterVolumeSpecName: "kube-api-access-wpgcf") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: 
"e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "kube-api-access-wpgcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.568746 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs" (OuterVolumeSpecName: "logs") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.569237 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.572130 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts" (OuterVolumeSpecName: "scripts") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.583505 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs" (OuterVolumeSpecName: "certs") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.628246 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.629238 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data" (OuterVolumeSpecName: "config-data") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.658820 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661428 4713 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661456 4713 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661483 4713 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d47268-3c4f-48cf-a362-b81aa7265dd4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661494 4713 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661501 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpgcf\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-kube-api-access-wpgcf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661512 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661520 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.661527 4713 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e2d47268-3c4f-48cf-a362-b81aa7265dd4-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.676446 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e2d47268-3c4f-48cf-a362-b81aa7265dd4" (UID: "e2d47268-3c4f-48cf-a362-b81aa7265dd4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.763291 4713 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2d47268-3c4f-48cf-a362-b81aa7265dd4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.822816 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.823444 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"e64b34b6-9839-4ef8-83fb-7bb963c865aa","Type":"ContainerDied","Data":"2c27a14630b97652c4d153e73a464d65429ed5e2fc0acd22639e22c94afb7fe5"} Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.823489 4713 scope.go:117] "RemoveContainer" containerID="dbc1f5d6023a0912a139e284ac46f7f930ca7fbe2a257dc097ee9198291a9e19" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.836564 4713 generic.go:334] "Generic (PLEG): container finished" podID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerID="c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb" exitCode=0 Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.836691 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerDied","Data":"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb"} Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.836727 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e2d47268-3c4f-48cf-a362-b81aa7265dd4","Type":"ContainerDied","Data":"110055bd0f24be8661c1183c7a9c7fa9aa0432ab0e8ea1a13a1260ce59c01c1e"} Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.837085 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.874527 4713 scope.go:117] "RemoveContainer" containerID="c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.927184 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.929114 4713 scope.go:117] "RemoveContainer" containerID="7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.942565 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.954187 4713 scope.go:117] "RemoveContainer" containerID="c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb" Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.954673 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb\": container with ID starting with c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb not found: ID does not exist" containerID="c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.954705 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb"} err="failed to get container status \"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb\": rpc error: code = NotFound desc = could not find container \"c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb\": container with ID starting with c6332888dbc937ce6b34d27bac5925e52c393b01886bc5a07096059b6466dafb not found: ID does not exist" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.954727 4713 
scope.go:117] "RemoveContainer" containerID="7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1" Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.954898 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1\": container with ID starting with 7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1 not found: ID does not exist" containerID="7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.954915 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1"} err="failed to get container status \"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1\": rpc error: code = NotFound desc = could not find container \"7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1\": container with ID starting with 7d1470d8ef812734500f8bca27439c2a46045fec8e804e56347f9c96dacd4de1 not found: ID does not exist" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.956565 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.967913 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.981743 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.982283 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" containerName="cloudkitty-proc" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982310 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" containerName="cloudkitty-proc" Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.982333 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854b9c7b-7ba2-4909-8a82-3f927c3b28c0" containerName="cloudkitty-storageinit" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982344 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="854b9c7b-7ba2-4909-8a82-3f927c3b28c0" containerName="cloudkitty-storageinit" Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.982394 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982403 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api" Jan 26 16:00:41 crc kubenswrapper[4713]: E0126 16:00:41.982439 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api-log" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982448 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api-log" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982696 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api-log" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982731 4713 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" containerName="cloudkitty-proc" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982764 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="854b9c7b-7ba2-4909-8a82-3f927c3b28c0" containerName="cloudkitty-storageinit" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.982776 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" containerName="cloudkitty-api" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.983807 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.985963 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.986264 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.986290 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.986267 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kbfj7" Jan 26 16:00:41 crc kubenswrapper[4713]: I0126 16:00:41.986498 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.002941 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.013238 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.014847 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.017212 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.017265 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.017394 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.033561 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.085815 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-certs\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.085935 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.085968 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.086037 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.086063 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-scripts\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.086097 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9zq\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-kube-api-access-mx9zq\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187588 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-logs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187645 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187674 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187695 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187757 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187784 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzd4\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-kube-api-access-thzd4\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187812 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187881 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187908 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-scripts\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187945 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9zq\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-kube-api-access-mx9zq\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.187982 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " 
pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.188014 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.188071 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.188096 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-scripts\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.188126 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-certs\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.192948 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-certs\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.192951 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-scripts\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.194675 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.195919 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.196495 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202dfc25-10dd-4c42-9c53-ccc3220a140b-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.210529 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9zq\" (UniqueName: \"kubernetes.io/projected/202dfc25-10dd-4c42-9c53-ccc3220a140b-kube-api-access-mx9zq\") pod 
\"cloudkitty-proc-0\" (UID: \"202dfc25-10dd-4c42-9c53-ccc3220a140b\") " pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290186 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290239 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290290 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290310 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-scripts\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290355 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-logs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290392 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290405 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290419 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.290461 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzd4\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-kube-api-access-thzd4\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.291045 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-logs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.293768 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.294103 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-scripts\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.299191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.301663 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.302050 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.303037 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-config-data\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.304781 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.308603 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzd4\" (UniqueName: \"kubernetes.io/projected/d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0-kube-api-access-thzd4\") pod \"cloudkitty-api-0\" (UID: \"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0\") " pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.308682 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.335690 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.867058 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 26 16:00:42 crc kubenswrapper[4713]: W0126 16:00:42.879595 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd03fb00d_d7ae_4f79_95b7_b1a8b717e2a0.slice/crio-9ecd7c1dd5d97ae7749a80dead57a3ccdfd1e4a02dd8365d02f7b8002a408d29 WatchSource:0}: Error finding container 9ecd7c1dd5d97ae7749a80dead57a3ccdfd1e4a02dd8365d02f7b8002a408d29: Status 404 returned error can't find the container with id 9ecd7c1dd5d97ae7749a80dead57a3ccdfd1e4a02dd8365d02f7b8002a408d29 Jan 26 16:00:42 crc kubenswrapper[4713]: I0126 16:00:42.882537 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.821729 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d47268-3c4f-48cf-a362-b81aa7265dd4" path="/var/lib/kubelet/pods/e2d47268-3c4f-48cf-a362-b81aa7265dd4/volumes" Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.823701 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64b34b6-9839-4ef8-83fb-7bb963c865aa" path="/var/lib/kubelet/pods/e64b34b6-9839-4ef8-83fb-7bb963c865aa/volumes" Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.864348 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"202dfc25-10dd-4c42-9c53-ccc3220a140b","Type":"ContainerStarted","Data":"bdbacc32162450785b5060711d4b2c9030bca937c680713208bd58f4ea1c4960"} Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.868223 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0","Type":"ContainerStarted","Data":"df0716e81f32a1b4e9e236c43080ba7d9ffa52134be68f4c406797e874bd92ad"} Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.868263 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0","Type":"ContainerStarted","Data":"793116e5eccbc1d18045a827659087c65060e4432cca02c9c6533f8d261a465a"} Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.868282 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0","Type":"ContainerStarted","Data":"9ecd7c1dd5d97ae7749a80dead57a3ccdfd1e4a02dd8365d02f7b8002a408d29"} Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.868374 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 26 16:00:43 crc kubenswrapper[4713]: I0126 16:00:43.891601 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.891576796 podStartE2EDuration="2.891576796s" podCreationTimestamp="2026-01-26 16:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:43.885646548 +0000 UTC m=+1619.022663783" watchObservedRunningTime="2026-01-26 16:00:43.891576796 +0000 UTC m=+1619.028594031" Jan 26 16:00:44 crc kubenswrapper[4713]: I0126 16:00:44.881785 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" 
event={"ID":"202dfc25-10dd-4c42-9c53-ccc3220a140b","Type":"ContainerStarted","Data":"fb6821ce2d0f2e2d34c619465509dce82a192b0299f3289a7160af6d819fe8d2"} Jan 26 16:00:44 crc kubenswrapper[4713]: I0126 16:00:44.914286 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.944679848 podStartE2EDuration="3.914263118s" podCreationTimestamp="2026-01-26 16:00:41 +0000 UTC" firstStartedPulling="2026-01-26 16:00:42.869146744 +0000 UTC m=+1618.006163979" lastFinishedPulling="2026-01-26 16:00:43.838730014 +0000 UTC m=+1618.975747249" observedRunningTime="2026-01-26 16:00:44.905722965 +0000 UTC m=+1620.042740210" watchObservedRunningTime="2026-01-26 16:00:44.914263118 +0000 UTC m=+1620.051280363" Jan 26 16:00:45 crc kubenswrapper[4713]: I0126 16:00:45.663570 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:45 crc kubenswrapper[4713]: I0126 16:00:45.768547 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 16:00:45 crc kubenswrapper[4713]: I0126 16:00:45.768796 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="dnsmasq-dns" containerID="cri-o://6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537" gracePeriod=10 Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:45.873004 4713 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.225:5353: connect: connection refused" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.346160 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-d4j8b"] Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.353223 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-d4j8b"] Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.353420 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.426912 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6nq\" (UniqueName: \"kubernetes.io/projected/6c10b80b-7a08-427b-ac13-29beceb2efd3-kube-api-access-9m6nq\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.426987 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-svc\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.427002 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.427046 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.427091 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.427111 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.427156 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-config\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533494 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533524 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533574 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-config\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533618 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6nq\" (UniqueName: \"kubernetes.io/projected/6c10b80b-7a08-427b-ac13-29beceb2efd3-kube-api-access-9m6nq\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533667 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533685 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-svc\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.533728 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.534510 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.538147 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.538181 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.541927 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-config\") pod 
\"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.541990 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.542124 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c10b80b-7a08-427b-ac13-29beceb2efd3-dns-svc\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.569175 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6nq\" (UniqueName: \"kubernetes.io/projected/6c10b80b-7a08-427b-ac13-29beceb2efd3-kube-api-access-9m6nq\") pod \"dnsmasq-dns-85f64749dc-d4j8b\" (UID: \"6c10b80b-7a08-427b-ac13-29beceb2efd3\") " pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.790917 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.950219 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.951264 4713 generic.go:334] "Generic (PLEG): container finished" podID="c56be499-f359-4178-a9b2-df69f97d684f" containerID="6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537" exitCode=0 Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.951293 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" event={"ID":"c56be499-f359-4178-a9b2-df69f97d684f","Type":"ContainerDied","Data":"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537"} Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.951314 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" event={"ID":"c56be499-f359-4178-a9b2-df69f97d684f","Type":"ContainerDied","Data":"e958173d66866986ad2124ee00fa4401bb7104f089b1ba5f0e0d03ac985d2f07"} Jan 26 16:00:46 crc kubenswrapper[4713]: I0126 16:00:46.951332 4713 scope.go:117] "RemoveContainer" containerID="6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.006179 4713 scope.go:117] "RemoveContainer" containerID="b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.075764 4713 scope.go:117] "RemoveContainer" containerID="6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537" Jan 26 16:00:47 crc kubenswrapper[4713]: E0126 16:00:47.076137 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537\": container with ID starting with 6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537 not found: ID does not exist" containerID="6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537" Jan 26 16:00:47 
crc kubenswrapper[4713]: I0126 16:00:47.076161 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537"} err="failed to get container status \"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537\": rpc error: code = NotFound desc = could not find container \"6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537\": container with ID starting with 6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537 not found: ID does not exist" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.076181 4713 scope.go:117] "RemoveContainer" containerID="b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba" Jan 26 16:00:47 crc kubenswrapper[4713]: E0126 16:00:47.080515 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba\": container with ID starting with b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba not found: ID does not exist" containerID="b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.080579 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba"} err="failed to get container status \"b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba\": rpc error: code = NotFound desc = could not find container \"b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba\": container with ID starting with b81f3a20015a5c31a8cb137931e65d3e54e13e1e070290dd60ed4abdb77c55ba not found: ID does not exist" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088242 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088444 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088515 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8wtx\" (UniqueName: \"kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088576 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088611 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: 
\"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.088679 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc\") pod \"c56be499-f359-4178-a9b2-df69f97d684f\" (UID: \"c56be499-f359-4178-a9b2-df69f97d684f\") " Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.118830 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx" (OuterVolumeSpecName: "kube-api-access-g8wtx") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "kube-api-access-g8wtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.163302 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.189893 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.190212 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.191240 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.191258 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.191267 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8wtx\" (UniqueName: \"kubernetes.io/projected/c56be499-f359-4178-a9b2-df69f97d684f-kube-api-access-g8wtx\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.191277 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.194126 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.198724 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config" (OuterVolumeSpecName: "config") pod "c56be499-f359-4178-a9b2-df69f97d684f" (UID: "c56be499-f359-4178-a9b2-df69f97d684f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.294387 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.294649 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56be499-f359-4178-a9b2-df69f97d684f-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.432485 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-d4j8b"] Jan 26 16:00:47 crc kubenswrapper[4713]: W0126 16:00:47.435806 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c10b80b_7a08_427b_ac13_29beceb2efd3.slice/crio-35564d97dbf1f1fe4b6f3645018f48a1a643a2694c4b2291ed99817a82ab7976 WatchSource:0}: Error finding container 35564d97dbf1f1fe4b6f3645018f48a1a643a2694c4b2291ed99817a82ab7976: Status 404 returned error can't find the container with id 35564d97dbf1f1fe4b6f3645018f48a1a643a2694c4b2291ed99817a82ab7976 Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.805863 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:00:47 crc kubenswrapper[4713]: E0126 16:00:47.806083 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.964965 4713 generic.go:334] "Generic (PLEG): container finished" podID="6c10b80b-7a08-427b-ac13-29beceb2efd3" containerID="9bd80970616dd4a4a943cf511bc6950f08d42d5401f200bab2e5554372ca33e8" exitCode=0 Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.965166 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" event={"ID":"6c10b80b-7a08-427b-ac13-29beceb2efd3","Type":"ContainerDied","Data":"9bd80970616dd4a4a943cf511bc6950f08d42d5401f200bab2e5554372ca33e8"} Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.965385 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" event={"ID":"6c10b80b-7a08-427b-ac13-29beceb2efd3","Type":"ContainerStarted","Data":"35564d97dbf1f1fe4b6f3645018f48a1a643a2694c4b2291ed99817a82ab7976"} Jan 26 16:00:47 crc kubenswrapper[4713]: I0126 16:00:47.969560 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-4k4wf" Jan 26 16:00:48 crc kubenswrapper[4713]: I0126 16:00:48.020051 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 16:00:48 crc kubenswrapper[4713]: I0126 16:00:48.029066 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-4k4wf"] Jan 26 16:00:48 crc kubenswrapper[4713]: I0126 16:00:48.981107 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" event={"ID":"6c10b80b-7a08-427b-ac13-29beceb2efd3","Type":"ContainerStarted","Data":"e7b54943662e135ecea24bb3f7d2401f18b2e0994d3032f8d2c075ed5fb4c616"} Jan 26 16:00:48 crc kubenswrapper[4713]: I0126 16:00:48.981590 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.013217 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" podStartSLOduration=4.013187187 podStartE2EDuration="4.013187187s" podCreationTimestamp="2026-01-26 16:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:49.009955615 +0000 UTC m=+1624.146972890" watchObservedRunningTime="2026-01-26 16:00:49.013187187 +0000 UTC m=+1624.150204462" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.709925 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:00:49 crc kubenswrapper[4713]: E0126 16:00:49.710430 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="init" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.710447 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="init" Jan 26 16:00:49 crc kubenswrapper[4713]: E0126 16:00:49.710466 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="dnsmasq-dns" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.710472 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="dnsmasq-dns" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.710700 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56be499-f359-4178-a9b2-df69f97d684f" containerName="dnsmasq-dns" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.712296 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.718568 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.817004 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56be499-f359-4178-a9b2-df69f97d684f" path="/var/lib/kubelet/pods/c56be499-f359-4178-a9b2-df69f97d684f/volumes" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.849333 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5qv\" (UniqueName: \"kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.849441 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.849707 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.951656 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.951843 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.951950 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5qv\" (UniqueName: \"kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.952284 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.952302 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:49 crc kubenswrapper[4713]: I0126 16:00:49.970849 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5qv\" (UniqueName: \"kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv\") pod \"community-operators-jn8sz\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:50 crc kubenswrapper[4713]: I0126 16:00:50.041382 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:00:50 crc kubenswrapper[4713]: I0126 16:00:50.613818 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:00:51 crc kubenswrapper[4713]: I0126 16:00:51.001196 4713 generic.go:334] "Generic (PLEG): container finished" podID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerID="c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750" exitCode=0 Jan 26 16:00:51 crc kubenswrapper[4713]: I0126 16:00:51.001287 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerDied","Data":"c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750"} Jan 26 16:00:51 crc kubenswrapper[4713]: I0126 16:00:51.001507 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerStarted","Data":"6738ce5293afb9c8f86b2db154112686cd795722b8b8791fdaf6204e181a4d1a"} Jan 26 16:00:54 crc kubenswrapper[4713]: I0126 16:00:54.062032 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerStarted","Data":"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a"} Jan 26 16:00:55 crc kubenswrapper[4713]: I0126 16:00:55.085825 4713 generic.go:334] "Generic (PLEG): container finished" podID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerID="731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a" exitCode=0 Jan 26 16:00:55 crc kubenswrapper[4713]: I0126 16:00:55.086096 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerDied","Data":"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a"} Jan 26 16:00:55 crc kubenswrapper[4713]: E0126 16:00:55.359788 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56be499_f359_4178_a9b2_df69f97d684f.slice/crio-conmon-6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:00:56 crc kubenswrapper[4713]: I0126 16:00:56.100682 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerStarted","Data":"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798"} Jan 26 16:00:56 crc 
kubenswrapper[4713]: I0126 16:00:56.793717 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-d4j8b" Jan 26 16:00:56 crc kubenswrapper[4713]: I0126 16:00:56.827833 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jn8sz" podStartSLOduration=3.246968974 podStartE2EDuration="7.827802487s" podCreationTimestamp="2026-01-26 16:00:49 +0000 UTC" firstStartedPulling="2026-01-26 16:00:51.002706742 +0000 UTC m=+1626.139723987" lastFinishedPulling="2026-01-26 16:00:55.583540255 +0000 UTC m=+1630.720557500" observedRunningTime="2026-01-26 16:00:56.130490378 +0000 UTC m=+1631.267507623" watchObservedRunningTime="2026-01-26 16:00:56.827802487 +0000 UTC m=+1631.964819732" Jan 26 16:00:56 crc kubenswrapper[4713]: I0126 16:00:56.880244 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:56 crc kubenswrapper[4713]: I0126 16:00:56.880798 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="dnsmasq-dns" containerID="cri-o://ac8c1346645601a68477e248ef0670e588886da87e96668d56a644955ec93acc" gracePeriod=10 Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.112216 4713 generic.go:334] "Generic (PLEG): container finished" podID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerID="ac8c1346645601a68477e248ef0670e588886da87e96668d56a644955ec93acc" exitCode=0 Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.113068 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" event={"ID":"582e597a-f9be-429c-8a24-0a0dc19a9274","Type":"ContainerDied","Data":"ac8c1346645601a68477e248ef0670e588886da87e96668d56a644955ec93acc"} Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.464771 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637299 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637399 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637435 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637465 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637480 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637506 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz6fw\" (UniqueName: \"kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.637594 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0\") pod \"582e597a-f9be-429c-8a24-0a0dc19a9274\" (UID: \"582e597a-f9be-429c-8a24-0a0dc19a9274\") " Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.642978 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw" (OuterVolumeSpecName: "kube-api-access-lz6fw") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "kube-api-access-lz6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.708878 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.719712 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.729460 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.729475 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.729767 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.730004 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config" (OuterVolumeSpecName: "config") pod "582e597a-f9be-429c-8a24-0a0dc19a9274" (UID: "582e597a-f9be-429c-8a24-0a0dc19a9274"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741768 4713 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741803 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741815 4713 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741824 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741834 4713 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741843 4713 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/582e597a-f9be-429c-8a24-0a0dc19a9274-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.741854 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz6fw\" (UniqueName: \"kubernetes.io/projected/582e597a-f9be-429c-8a24-0a0dc19a9274-kube-api-access-lz6fw\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:57 crc kubenswrapper[4713]: I0126 16:00:57.922410 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.129656 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" event={"ID":"582e597a-f9be-429c-8a24-0a0dc19a9274","Type":"ContainerDied","Data":"1ba87a737e900bb0e4be19761910c7a800dd569250882d42562c65a0d57b68d3"} Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.129955 4713 scope.go:117] "RemoveContainer" containerID="ac8c1346645601a68477e248ef0670e588886da87e96668d56a644955ec93acc" Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.130102 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-5c65w" Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.167919 4713 scope.go:117] "RemoveContainer" containerID="633b2a7e8e84ac5d1f7b63f963993fe076abd6a9a57482e5a8c4e94c82320c13" Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.176424 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:58 crc kubenswrapper[4713]: I0126 16:00:58.189631 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-5c65w"] Jan 26 16:00:59 crc kubenswrapper[4713]: I0126 16:00:59.803641 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:00:59 crc kubenswrapper[4713]: E0126 16:00:59.804296 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:00:59 crc kubenswrapper[4713]: I0126 16:00:59.815921 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" path="/var/lib/kubelet/pods/582e597a-f9be-429c-8a24-0a0dc19a9274/volumes" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.041793 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.041880 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.109505 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.172673 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490721-t2fgk"] Jan 26 16:01:00 crc kubenswrapper[4713]: E0126 16:01:00.173160 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="dnsmasq-dns" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.173178 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="dnsmasq-dns" Jan 26 16:01:00 crc kubenswrapper[4713]: E0126 16:01:00.173233 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="init" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.173240 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="init" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.173468 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="582e597a-f9be-429c-8a24-0a0dc19a9274" containerName="dnsmasq-dns" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.174224 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.184948 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-t2fgk"] Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.239167 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.272795 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.272868 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.273052 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lhc\" (UniqueName: \"kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.273121 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.352493 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.374995 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2lhc\" (UniqueName: \"kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.375067 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.375181 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.375204 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.380519 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.385450 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.388308 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.390777 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2lhc\" (UniqueName: \"kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc\") pod \"keystone-cron-29490721-t2fgk\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:00 crc kubenswrapper[4713]: I0126 16:01:00.500293 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:01 crc kubenswrapper[4713]: I0126 16:01:01.035701 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-t2fgk"] Jan 26 16:01:01 crc kubenswrapper[4713]: W0126 16:01:01.035916 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e1bc57_74ed_4f5e_a6e5_55cda8086cf1.slice/crio-b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c WatchSource:0}: Error finding container b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c: Status 404 returned error can't find the container with id b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c Jan 26 16:01:01 crc kubenswrapper[4713]: I0126 16:01:01.167323 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-t2fgk" event={"ID":"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1","Type":"ContainerStarted","Data":"b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c"} Jan 26 16:01:02 crc kubenswrapper[4713]: I0126 16:01:02.184915 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-t2fgk" event={"ID":"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1","Type":"ContainerStarted","Data":"dab849c19d7f8823ef0cdbe03a517dce34ea4ccaa380fa0f939749195cdffd1e"} Jan 26 16:01:02 crc kubenswrapper[4713]: I0126 16:01:02.185205 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jn8sz" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="registry-server" containerID="cri-o://41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798" gracePeriod=2 Jan 26 16:01:02 crc kubenswrapper[4713]: I0126 16:01:02.222150 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490721-t2fgk" podStartSLOduration=2.222125553 podStartE2EDuration="2.222125553s" podCreationTimestamp="2026-01-26 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:02.20726732 +0000 UTC m=+1637.344284575" watchObservedRunningTime="2026-01-26 16:01:02.222125553 +0000 UTC m=+1637.359142798" Jan 26 16:01:02 crc kubenswrapper[4713]: I0126 16:01:02.854056 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.028490 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content\") pod \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.028794 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb5qv\" (UniqueName: \"kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv\") pod \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.028839 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities\") pod \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\" (UID: \"c3d37bb7-45d1-48a9-937c-7affb875a7ca\") " Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.029469 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities" (OuterVolumeSpecName: "utilities") pod "c3d37bb7-45d1-48a9-937c-7affb875a7ca" (UID: "c3d37bb7-45d1-48a9-937c-7affb875a7ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.036961 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv" (OuterVolumeSpecName: "kube-api-access-nb5qv") pod "c3d37bb7-45d1-48a9-937c-7affb875a7ca" (UID: "c3d37bb7-45d1-48a9-937c-7affb875a7ca"). InnerVolumeSpecName "kube-api-access-nb5qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.092055 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3d37bb7-45d1-48a9-937c-7affb875a7ca" (UID: "c3d37bb7-45d1-48a9-937c-7affb875a7ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.131863 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.131899 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb5qv\" (UniqueName: \"kubernetes.io/projected/c3d37bb7-45d1-48a9-937c-7affb875a7ca-kube-api-access-nb5qv\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.131911 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d37bb7-45d1-48a9-937c-7affb875a7ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.202078 4713 generic.go:334] "Generic (PLEG): container finished" podID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerID="41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798" exitCode=0 Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.202723 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerDied","Data":"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798"} Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.202770 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn8sz" event={"ID":"c3d37bb7-45d1-48a9-937c-7affb875a7ca","Type":"ContainerDied","Data":"6738ce5293afb9c8f86b2db154112686cd795722b8b8791fdaf6204e181a4d1a"} Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.202794 4713 scope.go:117] "RemoveContainer" containerID="41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.202789 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn8sz" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.254944 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.255546 4713 scope.go:117] "RemoveContainer" containerID="731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.264664 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jn8sz"] Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.278858 4713 scope.go:117] "RemoveContainer" containerID="c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.354719 4713 scope.go:117] "RemoveContainer" containerID="41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798" Jan 26 16:01:03 crc kubenswrapper[4713]: E0126 16:01:03.355251 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798\": container with ID starting with 41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798 not found: ID does not exist" containerID="41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.355288 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798"} err="failed to get container status \"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798\": rpc error: code = NotFound desc = could not find container \"41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798\": container with ID starting with 41b88fc3cd70915cf3b93984288e3b27af96df6cb731504e88e12c4be1b6c798 not found: ID does not exist" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.355311 4713 scope.go:117] "RemoveContainer" containerID="731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a" Jan 26 16:01:03 crc kubenswrapper[4713]: E0126 16:01:03.355520 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a\": container with ID starting with 731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a not found: ID does not exist" containerID="731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.355538 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a"} err="failed to get container status \"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a\": rpc error: code = NotFound desc = could not find container \"731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a\": container with ID starting with 731dc77fedf4d0ab9cd0260870cba2fe5bf18ab8def95ac6d30256d89373e87a not found: ID does not exist" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.355549 4713 scope.go:117] "RemoveContainer" containerID="c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750" Jan 26 16:01:03 crc kubenswrapper[4713]: E0126 16:01:03.355774 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750\": container with ID starting with c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750 not found: ID does not exist" containerID="c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.355804 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750"} err="failed to get container status \"c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750\": rpc error: code = NotFound desc = could not find container \"c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750\": container with ID starting with c7126d2fa1106110684d9c857c1871bb4c3d9d0dc418f5cf997e503928832750 not found: ID does not exist" Jan 26 16:01:03 crc kubenswrapper[4713]: I0126 16:01:03.826284 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" path="/var/lib/kubelet/pods/c3d37bb7-45d1-48a9-937c-7affb875a7ca/volumes" Jan 26 16:01:04 crc kubenswrapper[4713]: I0126 16:01:04.217324 4713 generic.go:334] "Generic (PLEG): container finished" podID="b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" containerID="dab849c19d7f8823ef0cdbe03a517dce34ea4ccaa380fa0f939749195cdffd1e" exitCode=0 Jan 26 16:01:04 crc kubenswrapper[4713]: I0126 16:01:04.217430 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-t2fgk" event={"ID":"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1","Type":"ContainerDied","Data":"dab849c19d7f8823ef0cdbe03a517dce34ea4ccaa380fa0f939749195cdffd1e"} Jan 26 16:01:05 crc kubenswrapper[4713]: E0126 16:01:05.677932 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56be499_f359_4178_a9b2_df69f97d684f.slice/crio-conmon-6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.693733 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.792259 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle\") pod \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.792422 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2lhc\" (UniqueName: \"kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc\") pod \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.792564 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys\") pod \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.792596 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data\") pod \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\" (UID: \"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1\") " Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.798201 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" (UID: "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.798934 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc" (OuterVolumeSpecName: "kube-api-access-h2lhc") pod "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" (UID: "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1"). InnerVolumeSpecName "kube-api-access-h2lhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.830386 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" (UID: "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.852163 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data" (OuterVolumeSpecName: "config-data") pod "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" (UID: "b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.895599 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2lhc\" (UniqueName: \"kubernetes.io/projected/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-kube-api-access-h2lhc\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.895658 4713 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.895668 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:05 crc kubenswrapper[4713]: I0126 16:01:05.895677 4713 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4713]: I0126 16:01:06.246666 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-t2fgk" event={"ID":"b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1","Type":"ContainerDied","Data":"b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c"} Jan 26 16:01:06 crc kubenswrapper[4713]: I0126 16:01:06.246760 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e272d51be2355448ab31648fb2af036ba16f5b358d262556701e7585f34d5c" Jan 26 16:01:06 crc kubenswrapper[4713]: I0126 16:01:06.246769 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-t2fgk" Jan 26 16:01:09 crc kubenswrapper[4713]: I0126 16:01:09.282624 4713 generic.go:334] "Generic (PLEG): container finished" podID="36f2aa2e-c567-4d86-b3d6-c3572a45ccd1" containerID="e132586e3ce252e75dac97babe5d505eb17e32b9d424d709d351ff119a8a4618" exitCode=0 Jan 26 16:01:09 crc kubenswrapper[4713]: I0126 16:01:09.282785 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1","Type":"ContainerDied","Data":"e132586e3ce252e75dac97babe5d505eb17e32b9d424d709d351ff119a8a4618"} Jan 26 16:01:09 crc kubenswrapper[4713]: I0126 16:01:09.284992 4713 generic.go:334] "Generic (PLEG): container finished" podID="43b98a31-5771-411a-b08d-1c3f17c50a4d" containerID="d439fb1ca311e64b11d204702520e845843c634c860617feaa8b23061aef8323" exitCode=0 Jan 26 16:01:09 crc kubenswrapper[4713]: I0126 16:01:09.285017 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43b98a31-5771-411a-b08d-1c3f17c50a4d","Type":"ContainerDied","Data":"d439fb1ca311e64b11d204702520e845843c634c860617feaa8b23061aef8323"} Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.297438 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36f2aa2e-c567-4d86-b3d6-c3572a45ccd1","Type":"ContainerStarted","Data":"ef47d2a38cb6bdae53ef5e180a8ebcb3ddc91a5def1ccc01770d955b3dbe6bfd"} Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.298585 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.301445 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43b98a31-5771-411a-b08d-1c3f17c50a4d","Type":"ContainerStarted","Data":"c52140cd6709adec62bb8d4ed5ff2b0e46b3de0f41b811c16fa0018f80371909"} Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.301662 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.330067 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.330049133 podStartE2EDuration="37.330049133s" podCreationTimestamp="2026-01-26 16:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:10.327249664 +0000 UTC m=+1645.464266919" watchObservedRunningTime="2026-01-26 16:01:10.330049133 +0000 UTC m=+1645.467066368" Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.364748 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.36472952 podStartE2EDuration="37.36472952s" podCreationTimestamp="2026-01-26 16:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:10.35349779 +0000 UTC m=+1645.490515025" watchObservedRunningTime="2026-01-26 16:01:10.36472952 +0000 UTC m=+1645.501746755" Jan 26 16:01:10 crc kubenswrapper[4713]: I0126 16:01:10.804454 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:01:10 crc kubenswrapper[4713]: E0126 16:01:10.804976 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.149186 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv"] Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.150199 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="registry-server" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150216 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="registry-server" Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.150233 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="extract-utilities" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150239 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="extract-utilities" Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.150248 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" containerName="keystone-cron" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150255 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" containerName="keystone-cron" Jan 26 16:01:13 
crc kubenswrapper[4713]: E0126 16:01:13.150275 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="extract-content" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150283 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="extract-content" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150534 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d37bb7-45d1-48a9-937c-7affb875a7ca" containerName="registry-server" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.150556 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1" containerName="keystone-cron" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.151414 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: W0126 16:01:13.164330 4713 reflector.go:561] object-"openstack"/"openstack-aee-default-env": failed to list *v1.ConfigMap: configmaps "openstack-aee-default-env" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.164497 4713 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-aee-default-env\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openstack-aee-default-env\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 16:01:13 crc kubenswrapper[4713]: W0126 16:01:13.164519 4713 reflector.go:561] object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5": failed to list *v1.Secret: secrets "openstack-edpm-ipam-dockercfg-xs5x5" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.164556 4713 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam-dockercfg-xs5x5\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-edpm-ipam-dockercfg-xs5x5\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 16:01:13 crc kubenswrapper[4713]: W0126 16:01:13.164564 4713 reflector.go:561] object-"openstack"/"dataplane-ansible-ssh-private-key-secret": failed to list *v1.Secret: secrets "dataplane-ansible-ssh-private-key-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.164584 4713 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplane-ansible-ssh-private-key-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"dataplane-ansible-ssh-private-key-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and 
this object" logger="UnhandledError" Jan 26 16:01:13 crc kubenswrapper[4713]: W0126 16:01:13.164622 4713 reflector.go:561] object-"openstack"/"dataplanenodeset-openstack-edpm-ipam": failed to list *v1.Secret: secrets "dataplanenodeset-openstack-edpm-ipam" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 16:01:13 crc kubenswrapper[4713]: E0126 16:01:13.164644 4713 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplanenodeset-openstack-edpm-ipam\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"dataplanenodeset-openstack-edpm-ipam\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.183204 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv"] Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.254442 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.254509 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.254560 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.254638 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9r8d\" (UniqueName: \"kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.356704 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.356780 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.356825 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.356893 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9r8d\" (UniqueName: \"kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.379119 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:13 crc kubenswrapper[4713]: I0126 16:01:13.383656 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9r8d\" (UniqueName: \"kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.244966 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.255493 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:14 crc kubenswrapper[4713]: E0126 16:01:14.357981 4713 secret.go:188] Couldn't get secret openstack/dataplanenodeset-openstack-edpm-ipam: failed to sync secret cache: timed out waiting for the condition Jan 26 16:01:14 crc kubenswrapper[4713]: E0126 16:01:14.358074 4713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory podName:1bb061f5-90cb-4f19-a0e4-3fd295a232a2 nodeName:}" failed. No retries permitted until 2026-01-26 16:01:14.858052406 +0000 UTC m=+1649.995069651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "inventory" (UniqueName: "kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory") pod "repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" (UID: "1bb061f5-90cb-4f19-a0e4-3fd295a232a2") : failed to sync secret cache: timed out waiting for the condition Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.552380 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.552893 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.657724 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.887978 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.894289 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:14 crc kubenswrapper[4713]: I0126 16:01:14.984314 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:15 crc kubenswrapper[4713]: I0126 16:01:15.609173 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv"] Jan 26 16:01:15 crc kubenswrapper[4713]: E0126 16:01:15.936352 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56be499_f359_4178_a9b2_df69f97d684f.slice/crio-conmon-6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:16 crc kubenswrapper[4713]: I0126 16:01:16.372468 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" event={"ID":"1bb061f5-90cb-4f19-a0e4-3fd295a232a2","Type":"ContainerStarted","Data":"a06dd51144438558e0d1a84045d3f75ff8476f1ea7649e92520a219346811f62"} Jan 26 16:01:17 crc kubenswrapper[4713]: I0126 16:01:17.003949 4713 scope.go:117] "RemoveContainer" containerID="2d2694a0761ce2e6e1e2fe2588aeaf4de2a502634a8bce309350b7433867044e" Jan 26 16:01:19 crc kubenswrapper[4713]: I0126 16:01:19.633041 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 26 16:01:23 crc kubenswrapper[4713]: I0126 16:01:23.806471 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:01:23 crc kubenswrapper[4713]: E0126 16:01:23.807079 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:01:24 crc kubenswrapper[4713]: I0126 16:01:24.659853 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:01:24 crc kubenswrapper[4713]: I0126 16:01:24.706527 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 16:01:26 crc kubenswrapper[4713]: E0126 16:01:26.250564 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56be499_f359_4178_a9b2_df69f97d684f.slice/crio-conmon-6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:27 crc kubenswrapper[4713]: I0126 16:01:27.487712 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" event={"ID":"1bb061f5-90cb-4f19-a0e4-3fd295a232a2","Type":"ContainerStarted","Data":"c2955a9d96d8a7ae713b801060cd27e0b7638abe3a101f8ba59d3223bba3b15b"} Jan 26 16:01:27 crc kubenswrapper[4713]: I0126 16:01:27.506388 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" podStartSLOduration=3.409601664 podStartE2EDuration="14.506339086s" podCreationTimestamp="2026-01-26 16:01:13 +0000 UTC" firstStartedPulling="2026-01-26 16:01:15.623973823 
+0000 UTC m=+1650.760991098" lastFinishedPulling="2026-01-26 16:01:26.720711285 +0000 UTC m=+1661.857728520" observedRunningTime="2026-01-26 16:01:27.500399217 +0000 UTC m=+1662.637416492" watchObservedRunningTime="2026-01-26 16:01:27.506339086 +0000 UTC m=+1662.643356331" Jan 26 16:01:34 crc kubenswrapper[4713]: I0126 16:01:34.804215 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:01:34 crc kubenswrapper[4713]: E0126 16:01:34.805082 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:01:36 crc kubenswrapper[4713]: E0126 16:01:36.516714 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56be499_f359_4178_a9b2_df69f97d684f.slice/crio-conmon-6a400e4cf15917e484e954f54c6ef5e7a1602bfd5e74ed1834f653934690e537.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:38 crc kubenswrapper[4713]: I0126 16:01:38.632296 4713 generic.go:334] "Generic (PLEG): container finished" podID="1bb061f5-90cb-4f19-a0e4-3fd295a232a2" containerID="c2955a9d96d8a7ae713b801060cd27e0b7638abe3a101f8ba59d3223bba3b15b" exitCode=0 Jan 26 16:01:38 crc kubenswrapper[4713]: I0126 16:01:38.632416 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" event={"ID":"1bb061f5-90cb-4f19-a0e4-3fd295a232a2","Type":"ContainerDied","Data":"c2955a9d96d8a7ae713b801060cd27e0b7638abe3a101f8ba59d3223bba3b15b"} Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.241967 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.435569 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle\") pod \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.435639 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9r8d\" (UniqueName: \"kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d\") pod \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.435704 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam\") pod \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.435803 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") pod \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\" (UID: \"1bb061f5-90cb-4f19-a0e4-3fd295a232a2\") " Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.451023 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d" (OuterVolumeSpecName: "kube-api-access-j9r8d") pod "1bb061f5-90cb-4f19-a0e4-3fd295a232a2" (UID: "1bb061f5-90cb-4f19-a0e4-3fd295a232a2"). InnerVolumeSpecName "kube-api-access-j9r8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.451637 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1bb061f5-90cb-4f19-a0e4-3fd295a232a2" (UID: "1bb061f5-90cb-4f19-a0e4-3fd295a232a2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.464892 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory" (OuterVolumeSpecName: "inventory") pod "1bb061f5-90cb-4f19-a0e4-3fd295a232a2" (UID: "1bb061f5-90cb-4f19-a0e4-3fd295a232a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.468698 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1bb061f5-90cb-4f19-a0e4-3fd295a232a2" (UID: "1bb061f5-90cb-4f19-a0e4-3fd295a232a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.539041 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.539095 4713 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.539118 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9r8d\" (UniqueName: \"kubernetes.io/projected/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-kube-api-access-j9r8d\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.539138 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bb061f5-90cb-4f19-a0e4-3fd295a232a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.664074 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" event={"ID":"1bb061f5-90cb-4f19-a0e4-3fd295a232a2","Type":"ContainerDied","Data":"a06dd51144438558e0d1a84045d3f75ff8476f1ea7649e92520a219346811f62"} Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.664392 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06dd51144438558e0d1a84045d3f75ff8476f1ea7649e92520a219346811f62" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.664193 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.764133 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2"] Jan 26 16:01:40 crc kubenswrapper[4713]: E0126 16:01:40.764634 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb061f5-90cb-4f19-a0e4-3fd295a232a2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.764653 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb061f5-90cb-4f19-a0e4-3fd295a232a2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.764835 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb061f5-90cb-4f19-a0e4-3fd295a232a2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.765553 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.768121 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.780152 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.780530 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.781642 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.799240 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2"] Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.948119 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.948300 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:40 crc kubenswrapper[4713]: I0126 16:01:40.948344 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpfm\" (UniqueName: \"kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.050532 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.050815 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.050902 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpfm\" (UniqueName: \"kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.056390 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.059888 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.072522 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpfm\" (UniqueName: \"kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d66m2\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.093896 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.699984 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2"] Jan 26 16:01:41 crc kubenswrapper[4713]: I0126 16:01:41.705507 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:01:42 crc kubenswrapper[4713]: I0126 16:01:42.688951 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" event={"ID":"ca744311-cd43-444e-ba20-ad3a2e26a7a4","Type":"ContainerStarted","Data":"39c3a4ea12e22de5ad4ad8b8c958a7f28978a088b9ba04a537fe5b105401a1ee"} Jan 26 16:01:42 crc kubenswrapper[4713]: I0126 16:01:42.689414 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" event={"ID":"ca744311-cd43-444e-ba20-ad3a2e26a7a4","Type":"ContainerStarted","Data":"f071095b3e0f46c0d146f367d48e0d0b1e4032dfe08191085b038cc1113a27ac"} Jan 26 16:01:42 crc kubenswrapper[4713]: I0126 16:01:42.720889 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" podStartSLOduration=2.27840136 podStartE2EDuration="2.720868052s" podCreationTimestamp="2026-01-26 16:01:40 +0000 UTC" firstStartedPulling="2026-01-26 16:01:41.705153449 +0000 UTC m=+1676.842170694" lastFinishedPulling="2026-01-26 16:01:42.147620141 +0000 UTC m=+1677.284637386" observedRunningTime="2026-01-26 16:01:42.705922617 +0000 UTC m=+1677.842939852" watchObservedRunningTime="2026-01-26 16:01:42.720868052 +0000 UTC m=+1677.857885297" Jan 26 16:01:45 crc kubenswrapper[4713]: I0126 16:01:45.739422 4713 generic.go:334] "Generic (PLEG): container finished" podID="ca744311-cd43-444e-ba20-ad3a2e26a7a4" 
containerID="39c3a4ea12e22de5ad4ad8b8c958a7f28978a088b9ba04a537fe5b105401a1ee" exitCode=0 Jan 26 16:01:45 crc kubenswrapper[4713]: I0126 16:01:45.739973 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" event={"ID":"ca744311-cd43-444e-ba20-ad3a2e26a7a4","Type":"ContainerDied","Data":"39c3a4ea12e22de5ad4ad8b8c958a7f28978a088b9ba04a537fe5b105401a1ee"} Jan 26 16:01:46 crc kubenswrapper[4713]: I0126 16:01:46.804043 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:01:46 crc kubenswrapper[4713]: E0126 16:01:46.804595 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.302724 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.408880 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam\") pod \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.409021 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory\") pod \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.409113 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbpfm\" (UniqueName: \"kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm\") pod \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\" (UID: \"ca744311-cd43-444e-ba20-ad3a2e26a7a4\") " Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.415012 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm" (OuterVolumeSpecName: "kube-api-access-zbpfm") pod "ca744311-cd43-444e-ba20-ad3a2e26a7a4" (UID: "ca744311-cd43-444e-ba20-ad3a2e26a7a4"). InnerVolumeSpecName "kube-api-access-zbpfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.445691 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca744311-cd43-444e-ba20-ad3a2e26a7a4" (UID: "ca744311-cd43-444e-ba20-ad3a2e26a7a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.450272 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory" (OuterVolumeSpecName: "inventory") pod "ca744311-cd43-444e-ba20-ad3a2e26a7a4" (UID: "ca744311-cd43-444e-ba20-ad3a2e26a7a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.512870 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbpfm\" (UniqueName: \"kubernetes.io/projected/ca744311-cd43-444e-ba20-ad3a2e26a7a4-kube-api-access-zbpfm\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.512928 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.512955 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca744311-cd43-444e-ba20-ad3a2e26a7a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.776543 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" event={"ID":"ca744311-cd43-444e-ba20-ad3a2e26a7a4","Type":"ContainerDied","Data":"f071095b3e0f46c0d146f367d48e0d0b1e4032dfe08191085b038cc1113a27ac"} Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.777107 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f071095b3e0f46c0d146f367d48e0d0b1e4032dfe08191085b038cc1113a27ac" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.776680 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d66m2" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.898452 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7"] Jan 26 16:01:47 crc kubenswrapper[4713]: E0126 16:01:47.899163 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca744311-cd43-444e-ba20-ad3a2e26a7a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.899185 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca744311-cd43-444e-ba20-ad3a2e26a7a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.899625 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca744311-cd43-444e-ba20-ad3a2e26a7a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.900876 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.905160 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.905200 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.905436 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.905560 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:01:47 crc kubenswrapper[4713]: I0126 16:01:47.915924 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7"] Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.026295 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.026373 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.026536 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ff4\" (UniqueName: \"kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.027081 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.129075 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.129171 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.129201 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.129224 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ff4\" (UniqueName: \"kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.134926 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.136065 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.143568 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.148052 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ff4\" (UniqueName: \"kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.239443 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:01:48 crc kubenswrapper[4713]: I0126 16:01:48.899684 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7"] Jan 26 16:01:49 crc kubenswrapper[4713]: I0126 16:01:49.829764 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" event={"ID":"3221883d-48d9-4953-aeba-4969c3ea1ed9","Type":"ContainerStarted","Data":"abc38730400a507c610bdcb8d8c5bd0f55c222344319d309b451f337b365c5f9"} Jan 26 16:01:50 crc kubenswrapper[4713]: I0126 16:01:50.854384 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" event={"ID":"3221883d-48d9-4953-aeba-4969c3ea1ed9","Type":"ContainerStarted","Data":"813c48fd5a0684259f3d5c373e3e0e3dc0b5912ff4ac71bd7dc15ebb309db4eb"} Jan 26 16:01:50 crc kubenswrapper[4713]: I0126 16:01:50.882978 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" podStartSLOduration=3.212943181 podStartE2EDuration="3.882953624s" podCreationTimestamp="2026-01-26 16:01:47 +0000 UTC" firstStartedPulling="2026-01-26 16:01:48.912793479 +0000 UTC m=+1684.049810754" lastFinishedPulling="2026-01-26 16:01:49.582803952 +0000 UTC m=+1684.719821197" observedRunningTime="2026-01-26 16:01:50.879882466 +0000 UTC m=+1686.016899731" watchObservedRunningTime="2026-01-26 16:01:50.882953624 +0000 UTC m=+1686.019970869" Jan 26 16:01:59 crc kubenswrapper[4713]: I0126 16:01:59.803412 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:01:59 crc kubenswrapper[4713]: E0126 16:01:59.804036 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:02:10 crc kubenswrapper[4713]: I0126 16:02:10.805108 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:02:10 crc kubenswrapper[4713]: E0126 16:02:10.806702 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:02:17 crc kubenswrapper[4713]: I0126 16:02:17.217024 4713 scope.go:117] "RemoveContainer" containerID="2c70e016d1bc30e3060cc7d05817c6a625a971d666abcb4e8ac2dd9715da0525" Jan 26 16:02:17 crc kubenswrapper[4713]: I0126 16:02:17.245935 4713 scope.go:117] "RemoveContainer" containerID="4e8f2f653f219380db269096b53ae8d320a6da06d20fe12f7e3726109ac70c9f" Jan 26 16:02:17 crc kubenswrapper[4713]: I0126 16:02:17.345300 4713 scope.go:117] "RemoveContainer" containerID="a910425c6e3b6c49990aceb6d5ab231979abd0c63f0f92331b353feafd50e5d9" Jan 26 16:02:17 crc kubenswrapper[4713]: I0126 16:02:17.370307 4713 
scope.go:117] "RemoveContainer" containerID="029a64d69c193541304d83d822c19ab76ceb4a9fe8895f296a338c41d9385997" Jan 26 16:02:17 crc kubenswrapper[4713]: I0126 16:02:17.421579 4713 scope.go:117] "RemoveContainer" containerID="c5efe2fac3f90330afe61bdf4cf24d9be4a85ebc581b70ee00d028ac557124fe" Jan 26 16:02:25 crc kubenswrapper[4713]: I0126 16:02:25.818307 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:02:25 crc kubenswrapper[4713]: E0126 16:02:25.819426 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:02:39 crc kubenswrapper[4713]: I0126 16:02:39.804732 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:02:39 crc kubenswrapper[4713]: E0126 16:02:39.806083 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:02:50 crc kubenswrapper[4713]: I0126 16:02:50.803730 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:02:50 crc kubenswrapper[4713]: E0126 16:02:50.804988 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:03:02 crc kubenswrapper[4713]: I0126 16:03:02.841301 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:03:02 crc kubenswrapper[4713]: E0126 16:03:02.843417 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:03:13 crc kubenswrapper[4713]: I0126 16:03:13.806551 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:03:13 crc kubenswrapper[4713]: E0126 16:03:13.807223 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.546344 4713 scope.go:117] "RemoveContainer" containerID="47186be8e371abfd58579a55641771c689f57161c91fe3344935533e40ab24a7" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.585672 4713 scope.go:117] "RemoveContainer" containerID="7c930878284e055cb18e895a81de72fa3a3e28db807f621e9841129b8b204561" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.635854 4713 scope.go:117] "RemoveContainer" containerID="a7a69157a1fa955305fbbd18bd17942a335936248b8cc2f8540cd03b77aec694" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.666960 4713 scope.go:117] "RemoveContainer" containerID="c0c52f7042da6f8e751abae1da29cad7eaa249c53338867e8b994a88edfdf4ff" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.727869 4713 scope.go:117] "RemoveContainer" containerID="17c989c4a47c7806a9ea9dba3c0d2bf1c32390f084f841f3d50c0129b0207d35" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.763532 4713 scope.go:117] "RemoveContainer" containerID="076a4eb5adb1b98c64b8bdac5b8dd9c53fa332d03927d59c78a995aaf3a4c5cc" Jan 26 16:03:17 crc kubenswrapper[4713]: I0126 16:03:17.820349 4713 scope.go:117] "RemoveContainer" containerID="15260fdb61fe922d7a3e4ac66956ad3d99063b560c58ab4fde03cd84fa57e7f3" Jan 26 16:03:28 crc kubenswrapper[4713]: I0126 16:03:28.804206 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:03:28 crc kubenswrapper[4713]: E0126 16:03:28.804976 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:03:39 crc kubenswrapper[4713]: I0126 16:03:39.804570 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:03:39 crc kubenswrapper[4713]: E0126 16:03:39.810825 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:03:54 crc kubenswrapper[4713]: I0126 16:03:54.804180 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:03:54 crc kubenswrapper[4713]: E0126 16:03:54.804940 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:04:05 crc kubenswrapper[4713]: I0126 16:04:05.810739 4713 scope.go:117] "RemoveContainer" 
containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:04:05 crc kubenswrapper[4713]: E0126 16:04:05.811583 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.411102 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.414388 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.431982 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.433024 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.433597 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.435552 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8c2\" (UniqueName: \"kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.538595 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.539102 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.539141 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc 
kubenswrapper[4713]: I0126 16:04:14.540137 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.540138 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8c2\" (UniqueName: \"kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.580244 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8c2\" (UniqueName: \"kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2\") pod \"certified-operators-mskm2\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:14 crc kubenswrapper[4713]: I0126 16:04:14.750140 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:15 crc kubenswrapper[4713]: I0126 16:04:15.485474 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:16 crc kubenswrapper[4713]: I0126 16:04:16.230130 4713 generic.go:334] "Generic (PLEG): container finished" podID="d0af35c0-0558-4342-b066-aadca093fb08" containerID="1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297" exitCode=0 Jan 26 16:04:16 crc kubenswrapper[4713]: I0126 16:04:16.230210 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerDied","Data":"1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297"} Jan 26 16:04:16 crc kubenswrapper[4713]: I0126 16:04:16.230627 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerStarted","Data":"03cfaf994e4e12350b8221ff9ad631c8f34b7805485d0035ba6f08e66448b0e0"} Jan 26 16:04:16 crc kubenswrapper[4713]: I0126 16:04:16.803490 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:04:16 crc kubenswrapper[4713]: E0126 16:04:16.804019 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:04:17 crc kubenswrapper[4713]: I0126 16:04:17.240986 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerStarted","Data":"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb"} Jan 26 16:04:18 crc kubenswrapper[4713]: I0126 16:04:18.251823 4713 generic.go:334] "Generic (PLEG): container 
finished" podID="d0af35c0-0558-4342-b066-aadca093fb08" containerID="535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb" exitCode=0 Jan 26 16:04:18 crc kubenswrapper[4713]: I0126 16:04:18.251923 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerDied","Data":"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb"} Jan 26 16:04:19 crc kubenswrapper[4713]: I0126 16:04:19.265952 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerStarted","Data":"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f"} Jan 26 16:04:19 crc kubenswrapper[4713]: I0126 16:04:19.287396 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mskm2" podStartSLOduration=2.877906008 podStartE2EDuration="5.287378885s" podCreationTimestamp="2026-01-26 16:04:14 +0000 UTC" firstStartedPulling="2026-01-26 16:04:16.232562487 +0000 UTC m=+1831.369579732" lastFinishedPulling="2026-01-26 16:04:18.642035364 +0000 UTC m=+1833.779052609" observedRunningTime="2026-01-26 16:04:19.285649866 +0000 UTC m=+1834.422667101" watchObservedRunningTime="2026-01-26 16:04:19.287378885 +0000 UTC m=+1834.424396120" Jan 26 16:04:24 crc kubenswrapper[4713]: I0126 16:04:24.750572 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:24 crc kubenswrapper[4713]: I0126 16:04:24.751127 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:24 crc kubenswrapper[4713]: I0126 16:04:24.795341 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:25 crc kubenswrapper[4713]: I0126 16:04:25.398980 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:25 crc kubenswrapper[4713]: I0126 16:04:25.458296 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:27 crc kubenswrapper[4713]: I0126 16:04:27.345294 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mskm2" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="registry-server" containerID="cri-o://f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f" gracePeriod=2 Jan 26 16:04:27 crc kubenswrapper[4713]: I0126 16:04:27.927029 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.084383 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content\") pod \"d0af35c0-0558-4342-b066-aadca093fb08\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.084476 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c8c2\" (UniqueName: \"kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2\") pod \"d0af35c0-0558-4342-b066-aadca093fb08\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.084574 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities\") pod \"d0af35c0-0558-4342-b066-aadca093fb08\" (UID: \"d0af35c0-0558-4342-b066-aadca093fb08\") " Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.085121 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities" (OuterVolumeSpecName: "utilities") pod "d0af35c0-0558-4342-b066-aadca093fb08" (UID: "d0af35c0-0558-4342-b066-aadca093fb08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.085298 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.090587 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2" (OuterVolumeSpecName: "kube-api-access-7c8c2") pod "d0af35c0-0558-4342-b066-aadca093fb08" (UID: "d0af35c0-0558-4342-b066-aadca093fb08"). InnerVolumeSpecName "kube-api-access-7c8c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.136586 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0af35c0-0558-4342-b066-aadca093fb08" (UID: "d0af35c0-0558-4342-b066-aadca093fb08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.187065 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0af35c0-0558-4342-b066-aadca093fb08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.187114 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c8c2\" (UniqueName: \"kubernetes.io/projected/d0af35c0-0558-4342-b066-aadca093fb08-kube-api-access-7c8c2\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.358008 4713 generic.go:334] "Generic (PLEG): container finished" podID="d0af35c0-0558-4342-b066-aadca093fb08" containerID="f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f" exitCode=0 Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.358074 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerDied","Data":"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f"} Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.358138 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mskm2" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.359405 4713 scope.go:117] "RemoveContainer" containerID="f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.359329 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mskm2" event={"ID":"d0af35c0-0558-4342-b066-aadca093fb08","Type":"ContainerDied","Data":"03cfaf994e4e12350b8221ff9ad631c8f34b7805485d0035ba6f08e66448b0e0"} Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.399176 4713 scope.go:117] "RemoveContainer" containerID="535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.405994 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.417922 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mskm2"] Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.436629 4713 scope.go:117] "RemoveContainer" containerID="1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.501047 4713 scope.go:117] "RemoveContainer" containerID="f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f" Jan 26 16:04:28 crc kubenswrapper[4713]: E0126 16:04:28.501813 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f\": container with ID starting with f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f not found: ID does not exist" containerID="f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.501850 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f"} err="failed to get container status 
\"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f\": rpc error: code = NotFound desc = could not find container \"f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f\": container with ID starting with f8c65237d8d3b3760bddf0491c5d39a599e24a536967e2b2a4c6c37ae71ddf3f not found: ID does not exist" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.501872 4713 scope.go:117] "RemoveContainer" containerID="535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb" Jan 26 16:04:28 crc kubenswrapper[4713]: E0126 16:04:28.505755 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb\": container with ID starting with 535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb not found: ID does not exist" containerID="535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.505806 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb"} err="failed to get container status \"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb\": rpc error: code = NotFound desc = could not find container \"535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb\": container with ID starting with 535af0a0d23c56ea1bbe367884672bcec90bea7fa150cbd89ebf029f4696b1cb not found: ID does not exist" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.505840 4713 scope.go:117] "RemoveContainer" containerID="1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297" Jan 26 16:04:28 crc kubenswrapper[4713]: E0126 16:04:28.506291 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297\": container with ID starting with 1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297 not found: ID does not exist" containerID="1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297" Jan 26 16:04:28 crc kubenswrapper[4713]: I0126 16:04:28.506325 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297"} err="failed to get container status \"1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297\": rpc error: code = NotFound desc = could not find container \"1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297\": container with ID starting with 1d6e8c5ac5061dc056fc325ab78e80d0aba2f2de8f902439e92b25a68a33c297 not found: ID does not exist" Jan 26 16:04:29 crc kubenswrapper[4713]: I0126 16:04:29.823148 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0af35c0-0558-4342-b066-aadca093fb08" path="/var/lib/kubelet/pods/d0af35c0-0558-4342-b066-aadca093fb08/volumes" Jan 26 16:04:30 crc kubenswrapper[4713]: I0126 16:04:30.804173 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:04:30 crc kubenswrapper[4713]: E0126 16:04:30.804743 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:04:43 crc kubenswrapper[4713]: I0126 16:04:43.804030 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:04:43 crc kubenswrapper[4713]: E0126 16:04:43.804860 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:04:56 crc kubenswrapper[4713]: I0126 16:04:56.804182 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:04:56 crc kubenswrapper[4713]: E0126 16:04:56.805262 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:05:01 crc kubenswrapper[4713]: I0126 16:05:01.727892 4713 generic.go:334] "Generic (PLEG): container finished" podID="3221883d-48d9-4953-aeba-4969c3ea1ed9" containerID="813c48fd5a0684259f3d5c373e3e0e3dc0b5912ff4ac71bd7dc15ebb309db4eb" exitCode=0 Jan 26 16:05:01 crc kubenswrapper[4713]: I0126 16:05:01.727978 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" event={"ID":"3221883d-48d9-4953-aeba-4969c3ea1ed9","Type":"ContainerDied","Data":"813c48fd5a0684259f3d5c373e3e0e3dc0b5912ff4ac71bd7dc15ebb309db4eb"} Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.334282 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.460927 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam\") pod \"3221883d-48d9-4953-aeba-4969c3ea1ed9\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.461308 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ff4\" (UniqueName: \"kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4\") pod \"3221883d-48d9-4953-aeba-4969c3ea1ed9\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.461447 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle\") pod \"3221883d-48d9-4953-aeba-4969c3ea1ed9\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.461489 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory\") pod \"3221883d-48d9-4953-aeba-4969c3ea1ed9\" (UID: \"3221883d-48d9-4953-aeba-4969c3ea1ed9\") " Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.468571 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4" (OuterVolumeSpecName: "kube-api-access-d6ff4") pod "3221883d-48d9-4953-aeba-4969c3ea1ed9" (UID: "3221883d-48d9-4953-aeba-4969c3ea1ed9"). InnerVolumeSpecName "kube-api-access-d6ff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.469623 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3221883d-48d9-4953-aeba-4969c3ea1ed9" (UID: "3221883d-48d9-4953-aeba-4969c3ea1ed9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.489803 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory" (OuterVolumeSpecName: "inventory") pod "3221883d-48d9-4953-aeba-4969c3ea1ed9" (UID: "3221883d-48d9-4953-aeba-4969c3ea1ed9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.518927 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3221883d-48d9-4953-aeba-4969c3ea1ed9" (UID: "3221883d-48d9-4953-aeba-4969c3ea1ed9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.564545 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.564607 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6ff4\" (UniqueName: \"kubernetes.io/projected/3221883d-48d9-4953-aeba-4969c3ea1ed9-kube-api-access-d6ff4\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.564628 4713 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.564647 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3221883d-48d9-4953-aeba-4969c3ea1ed9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.760604 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" event={"ID":"3221883d-48d9-4953-aeba-4969c3ea1ed9","Type":"ContainerDied","Data":"abc38730400a507c610bdcb8d8c5bd0f55c222344319d309b451f337b365c5f9"} Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.760942 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc38730400a507c610bdcb8d8c5bd0f55c222344319d309b451f337b365c5f9" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.760655 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.880977 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq"] Jan 26 16:05:03 crc kubenswrapper[4713]: E0126 16:05:03.881675 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3221883d-48d9-4953-aeba-4969c3ea1ed9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.881716 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3221883d-48d9-4953-aeba-4969c3ea1ed9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 16:05:03 crc kubenswrapper[4713]: E0126 16:05:03.881740 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="registry-server" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.881750 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="registry-server" Jan 26 16:05:03 crc kubenswrapper[4713]: E0126 16:05:03.881771 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="extract-utilities" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.881780 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="extract-utilities" Jan 26 16:05:03 crc kubenswrapper[4713]: E0126 16:05:03.881805 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="extract-content" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.881813 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="extract-content" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.882094 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0af35c0-0558-4342-b066-aadca093fb08" containerName="registry-server" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.882130 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3221883d-48d9-4953-aeba-4969c3ea1ed9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.883095 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.885849 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.886072 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.887222 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.888061 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.897179 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq"] Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.975071 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.975520 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:03 crc kubenswrapper[4713]: I0126 16:05:03.975586 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j7f\" (UniqueName: \"kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.078004 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.078307 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.078479 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j7f\" (UniqueName: 
\"kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.083568 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.087858 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.111272 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j7f\" (UniqueName: \"kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.203467 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" Jan 26 16:05:04 crc kubenswrapper[4713]: I0126 16:05:04.896591 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq"] Jan 26 16:05:04 crc kubenswrapper[4713]: W0126 16:05:04.898679 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda590086e_4f64_45f1_8bc9_b1772bd1d7b4.slice/crio-58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a WatchSource:0}: Error finding container 58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a: Status 404 returned error can't find the container with id 58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a Jan 26 16:05:05 crc kubenswrapper[4713]: I0126 16:05:05.783179 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" event={"ID":"a590086e-4f64-45f1-8bc9-b1772bd1d7b4","Type":"ContainerStarted","Data":"199dbe5f71bb94d982d5a7a2a93129f2e42a823a954c040ecfa86a9071ae1825"} Jan 26 16:05:05 crc kubenswrapper[4713]: I0126 16:05:05.783540 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" event={"ID":"a590086e-4f64-45f1-8bc9-b1772bd1d7b4","Type":"ContainerStarted","Data":"58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a"} Jan 26 16:05:05 crc kubenswrapper[4713]: I0126 16:05:05.807132 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" podStartSLOduration=2.390431553 podStartE2EDuration="2.80710724s" podCreationTimestamp="2026-01-26 
16:05:03 +0000 UTC" firstStartedPulling="2026-01-26 16:05:04.906281762 +0000 UTC m=+1880.043299027" lastFinishedPulling="2026-01-26 16:05:05.322957479 +0000 UTC m=+1880.459974714" observedRunningTime="2026-01-26 16:05:05.800225613 +0000 UTC m=+1880.937242878" watchObservedRunningTime="2026-01-26 16:05:05.80710724 +0000 UTC m=+1880.944124495" Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.360599 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4dgsg"] Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.395406 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-44wbp"] Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.415942 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4dgsg"] Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.430825 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-44wbp"] Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.817159 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24468eba-9d4f-446e-ac2d-39c4855686ff" path="/var/lib/kubelet/pods/24468eba-9d4f-446e-ac2d-39c4855686ff/volumes" Jan 26 16:05:07 crc kubenswrapper[4713]: I0126 16:05:07.818053 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e809e90-de29-4ad5-ad0f-8dc49a202b3f" path="/var/lib/kubelet/pods/4e809e90-de29-4ad5-ad0f-8dc49a202b3f/volumes" Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.038299 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0f26-account-create-update-lpqwm"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.051629 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a2f4-account-create-update-mlshf"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.060900 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-glbp5"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.071518 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e73f-account-create-update-kb58x"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.079461 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-glbp5"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.092187 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a2f4-account-create-update-mlshf"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.101521 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0f26-account-create-update-lpqwm"] Jan 26 16:05:08 crc kubenswrapper[4713]: I0126 16:05:08.110080 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e73f-account-create-update-kb58x"] Jan 26 16:05:09 crc kubenswrapper[4713]: I0126 16:05:09.804862 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:05:09 crc kubenswrapper[4713]: E0126 16:05:09.805780 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:05:09 crc 
kubenswrapper[4713]: I0126 16:05:09.818245 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c39f53-9957-4f6a-912c-1c2217af11f1" path="/var/lib/kubelet/pods/10c39f53-9957-4f6a-912c-1c2217af11f1/volumes" Jan 26 16:05:09 crc kubenswrapper[4713]: I0126 16:05:09.819573 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db8c9f9-5fba-4647-841a-71f4bc24f438" path="/var/lib/kubelet/pods/9db8c9f9-5fba-4647-841a-71f4bc24f438/volumes" Jan 26 16:05:09 crc kubenswrapper[4713]: I0126 16:05:09.820793 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6598bbb-908a-4758-91c0-e72d8a9d4da7" path="/var/lib/kubelet/pods/b6598bbb-908a-4758-91c0-e72d8a9d4da7/volumes" Jan 26 16:05:09 crc kubenswrapper[4713]: I0126 16:05:09.822127 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc12f2df-b90e-4bb9-a255-5e5353ed1dd5" path="/var/lib/kubelet/pods/fc12f2df-b90e-4bb9-a255-5e5353ed1dd5/volumes" Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.045120 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-6826v"] Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.055183 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-dvtkq"] Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.064893 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-6826v"] Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.073861 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-87vcz"] Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.083613 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-dvtkq"] Jan 26 16:05:16 crc kubenswrapper[4713]: I0126 16:05:16.092061 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-87vcz"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.055517 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e0a5-account-create-update-rwzs7"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.072647 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-90d6-account-create-update-7jgpf"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.083209 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5qpqg"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.109143 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e0a5-account-create-update-rwzs7"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.133599 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-bd27-account-create-update-584sq"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.145036 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5qpqg"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.155560 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-90d6-account-create-update-7jgpf"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.163899 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-afae-account-create-update-f5xks"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.172013 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-bd27-account-create-update-584sq"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 
16:05:17.180658 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-afae-account-create-update-f5xks"] Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.820135 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0880b6cd-9c82-432d-8ca2-e536c3f9a68f" path="/var/lib/kubelet/pods/0880b6cd-9c82-432d-8ca2-e536c3f9a68f/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.820974 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5754eedb-9e1a-4f09-a0cd-9e16659b5708" path="/var/lib/kubelet/pods/5754eedb-9e1a-4f09-a0cd-9e16659b5708/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.821701 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc642ec-46c1-47a0-a022-3259e2d47d42" path="/var/lib/kubelet/pods/7dc642ec-46c1-47a0-a022-3259e2d47d42/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.822327 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e78b6f7-7c44-4bb4-b9a9-b763d463466f" path="/var/lib/kubelet/pods/7e78b6f7-7c44-4bb4-b9a9-b763d463466f/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.823435 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864c7381-a1b5-4e9c-986a-9c7368508fd0" path="/var/lib/kubelet/pods/864c7381-a1b5-4e9c-986a-9c7368508fd0/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.823962 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ae0895-a996-466d-800c-14494b72c006" path="/var/lib/kubelet/pods/92ae0895-a996-466d-800c-14494b72c006/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.824699 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12849c0-1ce5-4551-acf4-75f9fdc74fed" path="/var/lib/kubelet/pods/b12849c0-1ce5-4551-acf4-75f9fdc74fed/volumes" Jan 26 16:05:17 crc kubenswrapper[4713]: I0126 16:05:17.825759 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a98b39-d817-42a6-914f-529499cfc4bc" path="/var/lib/kubelet/pods/d2a98b39-d817-42a6-914f-529499cfc4bc/volumes" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.010115 4713 scope.go:117] "RemoveContainer" containerID="4cd83c03789f811761c6e86f06a0e75a66639c3562f95fa9f839c757fdf302b7" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.039341 4713 scope.go:117] "RemoveContainer" containerID="bf57827960b5474dc1e871957025838887f258c001d25a4642e797275e7d10ed" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.094205 4713 scope.go:117] "RemoveContainer" containerID="216fcaaf8eea7907695d61560a81b00283911dd81a4f6110338464f3e36cf2ef" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.167861 4713 scope.go:117] "RemoveContainer" containerID="cb89a9841538137dbb9da638ec59fdc923ee1f271770d546d7b1a46096d2068a" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.247600 4713 scope.go:117] "RemoveContainer" containerID="85e010915930eddbcf5ce2b55f5e30e16eba1b3225e8ad977bdf9fbdc8334d54" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.302344 4713 scope.go:117] "RemoveContainer" containerID="8d55a96d738dba8bfc0b9f82fda7d4b09378a72f724a427099407b82278e6945" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.350604 4713 scope.go:117] "RemoveContainer" containerID="f54ae1c1c93573cd3ad577a88c16f6df8280c05b1ccde3e9ca2bbd1a9147f148" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.373928 4713 scope.go:117] "RemoveContainer" 
containerID="8ef672a93d0746d48f47a20338f1e992ca8e2a57efdb609604bee25524c33d47" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.412340 4713 scope.go:117] "RemoveContainer" containerID="18e2efca262353feb30b01bab2ea527604ccca1bd75e4c7b9a97898a2555fcfa" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.433466 4713 scope.go:117] "RemoveContainer" containerID="5d18f621f5b0c9f8e399aa5ca23ef63a8992ab1457d582b892fa529aa822c1b4" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.461887 4713 scope.go:117] "RemoveContainer" containerID="1efbb0d0e89581fbc9606a982cfeacccdda4cec7466859356c3537cfa76646d2" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.488512 4713 scope.go:117] "RemoveContainer" containerID="5c61cbf5a85a4a22484384850c0b2975a3c272c2dbbb6a4875d0a18cccee1d81" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.520228 4713 scope.go:117] "RemoveContainer" containerID="9049dbb6073b4a02f2f242abdf7790a1e28f49d718b6da6067f920a91bd1f6dd" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.541700 4713 scope.go:117] "RemoveContainer" containerID="18d58f573953e226423c3188aa1d007ab34640ced5883d5de3bd364ffe84b26b" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.572128 4713 scope.go:117] "RemoveContainer" containerID="61b47ece138533de1d05d51fa484867c4c7a0c39e0c6680447e38400200fe2a7" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.594373 4713 scope.go:117] "RemoveContainer" containerID="2c0963094051b7349c365e4a6ebe649340386eb6d450e2063c80cade032387b7" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.622183 4713 scope.go:117] "RemoveContainer" containerID="2c005a844b05169d2ba38dc826ee25b60bd27ca2bb10d361260d66910d20268c" Jan 26 16:05:18 crc kubenswrapper[4713]: I0126 16:05:18.682690 4713 scope.go:117] "RemoveContainer" containerID="46768f605c2c316aa7a17408b7431ae8eb8dd67cee4e44fb1b5d9a26c2b99d97" Jan 26 16:05:20 crc kubenswrapper[4713]: I0126 16:05:20.803419 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:05:20 crc kubenswrapper[4713]: E0126 16:05:20.805232 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:05:31 crc kubenswrapper[4713]: I0126 16:05:31.804151 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39" Jan 26 16:05:31 crc kubenswrapper[4713]: E0126 16:05:31.804971 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:05:33 crc kubenswrapper[4713]: I0126 16:05:33.041842 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5m6jp"] Jan 26 16:05:33 crc kubenswrapper[4713]: I0126 16:05:33.050556 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5m6jp"] Jan 26 16:05:33 crc 
kubenswrapper[4713]: I0126 16:05:33.825887 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f04dc7-c644-4c8a-ac31-721292a6874d" path="/var/lib/kubelet/pods/d1f04dc7-c644-4c8a-ac31-721292a6874d/volumes"
Jan 26 16:05:45 crc kubenswrapper[4713]: I0126 16:05:45.819787 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39"
Jan 26 16:05:46 crc kubenswrapper[4713]: I0126 16:05:46.416970 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4"}
Jan 26 16:06:11 crc kubenswrapper[4713]: I0126 16:06:11.073612 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9mb6k"]
Jan 26 16:06:11 crc kubenswrapper[4713]: I0126 16:06:11.091768 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9mb6k"]
Jan 26 16:06:11 crc kubenswrapper[4713]: I0126 16:06:11.821579 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e096468-d163-47e3-b23a-be3b1e15d844" path="/var/lib/kubelet/pods/4e096468-d163-47e3-b23a-be3b1e15d844/volumes"
Jan 26 16:06:18 crc kubenswrapper[4713]: I0126 16:06:18.028301 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btgqg"]
Jan 26 16:06:18 crc kubenswrapper[4713]: I0126 16:06:18.045532 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btgqg"]
Jan 26 16:06:19 crc kubenswrapper[4713]: I0126 16:06:19.080406 4713 scope.go:117] "RemoveContainer" containerID="b3d95c2a686b2cf19fb37d4d134cd8c2b4059bacd4b7bd57d7c26b2b20f8d38c"
Jan 26 16:06:19 crc kubenswrapper[4713]: I0126 16:06:19.148214 4713 scope.go:117] "RemoveContainer" containerID="5e9c06949e94b0e9ecd98a54170002f932093f73176c8a23cd3413f84fe164c3"
Jan 26 16:06:19 crc kubenswrapper[4713]: I0126 16:06:19.818525 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c00cbec-fd99-4aee-b111-71c6a9d0cacc" path="/var/lib/kubelet/pods/2c00cbec-fd99-4aee-b111-71c6a9d0cacc/volumes"
Jan 26 16:06:22 crc kubenswrapper[4713]: I0126 16:06:22.056636 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9vf54"]
Jan 26 16:06:22 crc kubenswrapper[4713]: I0126 16:06:22.070941 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hhmsm"]
Jan 26 16:06:22 crc kubenswrapper[4713]: I0126 16:06:22.084386 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9vf54"]
Jan 26 16:06:22 crc kubenswrapper[4713]: I0126 16:06:22.097924 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hhmsm"]
Jan 26 16:06:23 crc kubenswrapper[4713]: I0126 16:06:23.822728 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483861ab-4f8a-485a-91f2-ad78944b7124" path="/var/lib/kubelet/pods/483861ab-4f8a-485a-91f2-ad78944b7124/volumes"
Jan 26 16:06:23 crc kubenswrapper[4713]: I0126 16:06:23.825675 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848ce8ac-5171-45ab-b1c0-737d4ba93663" path="/var/lib/kubelet/pods/848ce8ac-5171-45ab-b1c0-737d4ba93663/volumes"
Jan 26 16:06:34 crc kubenswrapper[4713]: I0126 16:06:34.050057 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h5gr4"]
Jan 26 16:06:34 crc kubenswrapper[4713]: I0126 16:06:34.059546 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h5gr4"]
Jan 26 16:06:35 crc kubenswrapper[4713]: I0126 16:06:35.814830 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3c9534-c956-4a61-a9fe-73026809a2bb" path="/var/lib/kubelet/pods/1a3c9534-c956-4a61-a9fe-73026809a2bb/volumes"
Jan 26 16:06:49 crc kubenswrapper[4713]: I0126 16:06:49.051911 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nkt8b"]
Jan 26 16:06:49 crc kubenswrapper[4713]: I0126 16:06:49.069632 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nkt8b"]
Jan 26 16:06:49 crc kubenswrapper[4713]: I0126 16:06:49.814824 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a35a5b-49a1-45aa-9090-2aab8a4893ce" path="/var/lib/kubelet/pods/c8a35a5b-49a1-45aa-9090-2aab8a4893ce/volumes"
Jan 26 16:07:00 crc kubenswrapper[4713]: I0126 16:07:00.053759 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xfq6j"]
Jan 26 16:07:00 crc kubenswrapper[4713]: I0126 16:07:00.064054 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xfq6j"]
Jan 26 16:07:01 crc kubenswrapper[4713]: I0126 16:07:01.820745 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67bee733-1013-44d9-ac74-5ce552dbb606" path="/var/lib/kubelet/pods/67bee733-1013-44d9-ac74-5ce552dbb606/volumes"
Jan 26 16:07:12 crc kubenswrapper[4713]: I0126 16:07:12.408462 4713 generic.go:334] "Generic (PLEG): container finished" podID="a590086e-4f64-45f1-8bc9-b1772bd1d7b4" containerID="199dbe5f71bb94d982d5a7a2a93129f2e42a823a954c040ecfa86a9071ae1825" exitCode=0
Jan 26 16:07:12 crc kubenswrapper[4713]: I0126 16:07:12.408621 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" event={"ID":"a590086e-4f64-45f1-8bc9-b1772bd1d7b4","Type":"ContainerDied","Data":"199dbe5f71bb94d982d5a7a2a93129f2e42a823a954c040ecfa86a9071ae1825"}
Jan 26 16:07:13 crc kubenswrapper[4713]: I0126 16:07:13.919977 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.104716 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam\") pod \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") "
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.104859 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j7f\" (UniqueName: \"kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f\") pod \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") "
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.104887 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory\") pod \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\" (UID: \"a590086e-4f64-45f1-8bc9-b1772bd1d7b4\") "
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.111437 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f" (OuterVolumeSpecName: "kube-api-access-f5j7f") pod "a590086e-4f64-45f1-8bc9-b1772bd1d7b4" (UID: "a590086e-4f64-45f1-8bc9-b1772bd1d7b4"). InnerVolumeSpecName "kube-api-access-f5j7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.134781 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory" (OuterVolumeSpecName: "inventory") pod "a590086e-4f64-45f1-8bc9-b1772bd1d7b4" (UID: "a590086e-4f64-45f1-8bc9-b1772bd1d7b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.152950 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a590086e-4f64-45f1-8bc9-b1772bd1d7b4" (UID: "a590086e-4f64-45f1-8bc9-b1772bd1d7b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.207720 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.207752 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j7f\" (UniqueName: \"kubernetes.io/projected/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-kube-api-access-f5j7f\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.207766 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a590086e-4f64-45f1-8bc9-b1772bd1d7b4-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.449561 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq" event={"ID":"a590086e-4f64-45f1-8bc9-b1772bd1d7b4","Type":"ContainerDied","Data":"58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a"}
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.449831 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c1e0af9d4f6286e40247318be317cd8da4b2bf2bfb44889c6b9660aa1e315a"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.449922 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.539547 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"]
Jan 26 16:07:14 crc kubenswrapper[4713]: E0126 16:07:14.540089 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a590086e-4f64-45f1-8bc9-b1772bd1d7b4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.540107 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="a590086e-4f64-45f1-8bc9-b1772bd1d7b4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.540293 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="a590086e-4f64-45f1-8bc9-b1772bd1d7b4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.541229 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.543421 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.543561 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.544222 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.545799 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.550720 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"]
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.718139 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86kd\" (UniqueName: \"kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.718320 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.718748 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.820996 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.821253 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w86kd\" (UniqueName: \"kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.821321 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.827051 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.827547 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.851212 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86kd\" (UniqueName: \"kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5nm87\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:14 crc kubenswrapper[4713]: I0126 16:07:14.869023 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"
Jan 26 16:07:15 crc kubenswrapper[4713]: I0126 16:07:15.415329 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:07:15 crc kubenswrapper[4713]: I0126 16:07:15.420605 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87"]
Jan 26 16:07:15 crc kubenswrapper[4713]: I0126 16:07:15.459971 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" event={"ID":"fd765284-f110-48b0-b7c7-0116b2f6a5e0","Type":"ContainerStarted","Data":"0ece8bfae7077366a6ef6ac0bfe7771ce381b2b150aabad45653417a52c1e563"}
Jan 26 16:07:17 crc kubenswrapper[4713]: I0126 16:07:17.486488 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" event={"ID":"fd765284-f110-48b0-b7c7-0116b2f6a5e0","Type":"ContainerStarted","Data":"615f3adbaebf2471855248fe3cbf6227b32e60791bb570f02337a21d36cb2047"}
Jan 26 16:07:17 crc kubenswrapper[4713]: I0126 16:07:17.512037 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" podStartSLOduration=2.657464817 podStartE2EDuration="3.512015236s" podCreationTimestamp="2026-01-26 16:07:14 +0000 UTC" firstStartedPulling="2026-01-26 16:07:15.415056753 +0000 UTC m=+2010.552073988" lastFinishedPulling="2026-01-26 16:07:16.269607132 +0000 UTC m=+2011.406624407" observedRunningTime="2026-01-26 16:07:17.50376322 +0000 UTC m=+2012.640780485" watchObservedRunningTime="2026-01-26 16:07:17.512015236 +0000 UTC m=+2012.649032481"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.274557 4713 scope.go:117] "RemoveContainer" containerID="b1349437bc5f4a953a0930061706654f25b0937e4c332347c8ee999ace3d4f9c"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.301442 4713 scope.go:117] "RemoveContainer" containerID="0fcee5829eb3135a79b29eceeca390dc3037a5854b7384e2f583ec3bae7763fa"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.359911 4713 scope.go:117] "RemoveContainer" containerID="e3e75f97e36457d6181f8b3788e3bfea1ccdf8454baaa33071073ab40398cc16"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.418114 4713 scope.go:117] "RemoveContainer" containerID="0e96aca03a12ea97b933d81698aaa79cdb2240ef0ada34d64fd5f83bc6efeae4"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.455992 4713 scope.go:117] "RemoveContainer" containerID="e7498744938e3b926090ff7b4b1fe982879ec31e8947acdc0b852a42383e08ff"
Jan 26 16:07:19 crc kubenswrapper[4713]: I0126 16:07:19.533886 4713 scope.go:117] "RemoveContainer" containerID="ed6ab4f817e2e10891a8f9cb34536e375e46daf715bdad954c2881a2c8dc5a84"
Jan 26 16:07:34 crc kubenswrapper[4713]: I0126 16:07:34.049812 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-57ae-account-create-update-qv9q4"]
Jan 26 16:07:34 crc kubenswrapper[4713]: I0126 16:07:34.073015 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-57ae-account-create-update-qv9q4"]
Jan 26 16:07:35 crc kubenswrapper[4713]: I0126 16:07:35.047330 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pdtm9"]
Jan 26 16:07:35 crc kubenswrapper[4713]: I0126 16:07:35.059243 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pdtm9"]
Jan 26 16:07:35 crc kubenswrapper[4713]: I0126 16:07:35.846249 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4970397f-f884-40ad-bca5-6c272f27ab4f" path="/var/lib/kubelet/pods/4970397f-f884-40ad-bca5-6c272f27ab4f/volumes"
Jan 26 16:07:35 crc kubenswrapper[4713]: I0126 16:07:35.848745 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd" path="/var/lib/kubelet/pods/a8d04bdd-c4c5-4f1c-aa2b-aa7d7a4832dd/volumes"
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.033113 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2mmsz"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.045440 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3e94-account-create-update-qmnfk"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.056575 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bxdgl"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.064186 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-19b6-account-create-update-h4s97"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.071659 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2mmsz"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.079350 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3e94-account-create-update-qmnfk"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.088422 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-19b6-account-create-update-h4s97"]
Jan 26 16:07:36 crc kubenswrapper[4713]: I0126 16:07:36.096525 4713 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bxdgl"] Jan 26 16:07:37 crc kubenswrapper[4713]: I0126 16:07:37.818494 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d659c0-5c5b-461f-89d0-435b02bd409b" path="/var/lib/kubelet/pods/02d659c0-5c5b-461f-89d0-435b02bd409b/volumes" Jan 26 16:07:37 crc kubenswrapper[4713]: I0126 16:07:37.819349 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832abf5e-06a6-4e5f-8d93-0e91eefdb0de" path="/var/lib/kubelet/pods/832abf5e-06a6-4e5f-8d93-0e91eefdb0de/volumes" Jan 26 16:07:37 crc kubenswrapper[4713]: I0126 16:07:37.819961 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97311fac-af74-45cb-ad3a-c7a67efaf219" path="/var/lib/kubelet/pods/97311fac-af74-45cb-ad3a-c7a67efaf219/volumes" Jan 26 16:07:37 crc kubenswrapper[4713]: I0126 16:07:37.820620 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc98b501-5b02-49f5-a1e3-2543e981eab8" path="/var/lib/kubelet/pods/dc98b501-5b02-49f5-a1e3-2543e981eab8/volumes" Jan 26 16:08:03 crc kubenswrapper[4713]: I0126 16:08:03.301194 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:03 crc kubenswrapper[4713]: I0126 16:08:03.301901 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.683533 4713 scope.go:117] "RemoveContainer" containerID="b47eb8872d6465f6b4d32e8f33295ef5509e59a6b7251dc02a22b96b3ddae660" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.714050 4713 scope.go:117] "RemoveContainer" containerID="6c2f95925bc58c01899034302149f7a0502fbf5d1913f568ba00ef5f70c4f32a" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.790631 4713 scope.go:117] "RemoveContainer" containerID="e6634f90c1c979cfac0111338f8a7212c4bc5a30e1489ef98f2f50ba8f364bc4" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.831845 4713 scope.go:117] "RemoveContainer" containerID="c825c0f3309e6cf9330a0b533d515fb9cc8f8c3f408053b7692eb620a3aa1ead" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.873999 4713 scope.go:117] "RemoveContainer" containerID="2eb584d2d5d062802061e0c73deb5ce4b51e1cef699c90eaf089cbb92206eaa7" Jan 26 16:08:19 crc kubenswrapper[4713]: I0126 16:08:19.931240 4713 scope.go:117] "RemoveContainer" containerID="1d3b8fe62f61d99d10256eb79b6a763af91cc62194b11ed4b3902401600ab3f0" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.634460 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.638819 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.655496 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.739404 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpftf\" (UniqueName: \"kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.739545 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.739607 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.841999 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.842180 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpftf\" (UniqueName: \"kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.842300 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.842571 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.842699 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.871495 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hpftf\" (UniqueName: \"kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf\") pod \"redhat-operators-ht69g\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:24 crc kubenswrapper[4713]: I0126 16:08:24.967707 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:25 crc kubenswrapper[4713]: I0126 16:08:25.389626 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:26 crc kubenswrapper[4713]: I0126 16:08:26.046023 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-254xv"] Jan 26 16:08:26 crc kubenswrapper[4713]: I0126 16:08:26.058442 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-254xv"] Jan 26 16:08:26 crc kubenswrapper[4713]: I0126 16:08:26.274698 4713 generic.go:334] "Generic (PLEG): container finished" podID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerID="c43b058287722ebe2b1d07fc73519faccf2b83764d53712d134643c5d8725718" exitCode=0 Jan 26 16:08:26 crc kubenswrapper[4713]: I0126 16:08:26.274737 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerDied","Data":"c43b058287722ebe2b1d07fc73519faccf2b83764d53712d134643c5d8725718"} Jan 26 16:08:26 crc kubenswrapper[4713]: I0126 16:08:26.274760 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerStarted","Data":"a6a95a928982649ec233eacf5e4a6dfa817248405189dfc00174c1fa52143a55"} Jan 26 16:08:27 crc kubenswrapper[4713]: I0126 16:08:27.815099 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edbeb019-b62a-41e2-8af4-63acbe2e0adb" path="/var/lib/kubelet/pods/edbeb019-b62a-41e2-8af4-63acbe2e0adb/volumes" Jan 26 16:08:28 crc kubenswrapper[4713]: I0126 16:08:28.298351 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerStarted","Data":"96665c32686bf2f2a095b2af6d4bc7690e088839fe3e9b29418fad393b9bef97"} Jan 26 16:08:33 crc kubenswrapper[4713]: I0126 16:08:33.302123 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:33 crc kubenswrapper[4713]: I0126 16:08:33.302918 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:08:34 crc kubenswrapper[4713]: I0126 16:08:34.382213 4713 generic.go:334] "Generic (PLEG): container finished" podID="fd765284-f110-48b0-b7c7-0116b2f6a5e0" containerID="615f3adbaebf2471855248fe3cbf6227b32e60791bb570f02337a21d36cb2047" exitCode=0 Jan 26 16:08:34 crc kubenswrapper[4713]: I0126 16:08:34.382296 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" event={"ID":"fd765284-f110-48b0-b7c7-0116b2f6a5e0","Type":"ContainerDied","Data":"615f3adbaebf2471855248fe3cbf6227b32e60791bb570f02337a21d36cb2047"} Jan 26 16:08:34 crc kubenswrapper[4713]: I0126 16:08:34.384796 4713 generic.go:334] "Generic (PLEG): container finished" podID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerID="96665c32686bf2f2a095b2af6d4bc7690e088839fe3e9b29418fad393b9bef97" exitCode=0 Jan 26 16:08:34 crc kubenswrapper[4713]: I0126 16:08:34.384848 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerDied","Data":"96665c32686bf2f2a095b2af6d4bc7690e088839fe3e9b29418fad393b9bef97"} Jan 26 16:08:35 crc kubenswrapper[4713]: I0126 16:08:35.395494 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerStarted","Data":"3900e1c8b0a2c607d25584fb0f5ae7f169e951fb4b0373d2275275e5df3296ae"} Jan 26 16:08:35 crc kubenswrapper[4713]: I0126 16:08:35.418543 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ht69g" podStartSLOduration=2.92052474 podStartE2EDuration="11.418525532s" podCreationTimestamp="2026-01-26 16:08:24 +0000 UTC" firstStartedPulling="2026-01-26 16:08:26.275893724 +0000 UTC m=+2081.412910949" lastFinishedPulling="2026-01-26 16:08:34.773894506 +0000 UTC m=+2089.910911741" observedRunningTime="2026-01-26 16:08:35.415250218 +0000 UTC m=+2090.552267463" watchObservedRunningTime="2026-01-26 16:08:35.418525532 +0000 UTC m=+2090.555542777" Jan 26 16:08:35 crc kubenswrapper[4713]: I0126 16:08:35.917163 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.057295 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam\") pod \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.057335 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory\") pod \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.057479 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86kd\" (UniqueName: \"kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd\") pod \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\" (UID: \"fd765284-f110-48b0-b7c7-0116b2f6a5e0\") " Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.064957 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd" (OuterVolumeSpecName: "kube-api-access-w86kd") pod "fd765284-f110-48b0-b7c7-0116b2f6a5e0" (UID: "fd765284-f110-48b0-b7c7-0116b2f6a5e0"). InnerVolumeSpecName "kube-api-access-w86kd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.091482 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory" (OuterVolumeSpecName: "inventory") pod "fd765284-f110-48b0-b7c7-0116b2f6a5e0" (UID: "fd765284-f110-48b0-b7c7-0116b2f6a5e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.097330 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd765284-f110-48b0-b7c7-0116b2f6a5e0" (UID: "fd765284-f110-48b0-b7c7-0116b2f6a5e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.160466 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.160499 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd765284-f110-48b0-b7c7-0116b2f6a5e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.160509 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w86kd\" (UniqueName: \"kubernetes.io/projected/fd765284-f110-48b0-b7c7-0116b2f6a5e0-kube-api-access-w86kd\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.407258 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" event={"ID":"fd765284-f110-48b0-b7c7-0116b2f6a5e0","Type":"ContainerDied","Data":"0ece8bfae7077366a6ef6ac0bfe7771ce381b2b150aabad45653417a52c1e563"} Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.407318 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ece8bfae7077366a6ef6ac0bfe7771ce381b2b150aabad45653417a52c1e563" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.407423 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5nm87" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.525628 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78"] Jan 26 16:08:36 crc kubenswrapper[4713]: E0126 16:08:36.526071 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd765284-f110-48b0-b7c7-0116b2f6a5e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.526089 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd765284-f110-48b0-b7c7-0116b2f6a5e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.526314 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd765284-f110-48b0-b7c7-0116b2f6a5e0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.527082 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.532735 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.533538 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.533754 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.533899 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.574164 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78"] Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.674760 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.674822 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlp7x\" (UniqueName: \"kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.675185 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.777407 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.777466 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlp7x\" (UniqueName: \"kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.777614 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.781425 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.784299 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.801041 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlp7x\" (UniqueName: \"kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-78h78\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:36 crc kubenswrapper[4713]: I0126 16:08:36.860154 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78"
Jan 26 16:08:37 crc kubenswrapper[4713]: W0126 16:08:37.434907 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1393b971_2819_450f_a44b_978658f849e5.slice/crio-2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501 WatchSource:0}: Error finding container 2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501: Status 404 returned error can't find the container with id 2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501
Jan 26 16:08:37 crc kubenswrapper[4713]: I0126 16:08:37.438752 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78"]
Jan 26 16:08:38 crc kubenswrapper[4713]: I0126 16:08:38.428665 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" event={"ID":"1393b971-2819-450f-a44b-978658f849e5","Type":"ContainerStarted","Data":"fc71e32bfe4ce2fa25c15fca8740c20a898f6a3ce2b4d05c43dd3a89bcfe4527"}
Jan 26 16:08:38 crc kubenswrapper[4713]: I0126 16:08:38.429011 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" event={"ID":"1393b971-2819-450f-a44b-978658f849e5","Type":"ContainerStarted","Data":"2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501"}
Jan 26 16:08:38 crc kubenswrapper[4713]: I0126 16:08:38.453481 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" podStartSLOduration=1.9747236639999999 podStartE2EDuration="2.453459301s" podCreationTimestamp="2026-01-26 16:08:36 +0000 UTC" firstStartedPulling="2026-01-26 16:08:37.438065297 +0000 UTC m=+2092.575082532" lastFinishedPulling="2026-01-26 16:08:37.916800894 +0000 UTC m=+2093.053818169" observedRunningTime="2026-01-26 16:08:38.445124753 +0000 UTC m=+2093.582141988" watchObservedRunningTime="2026-01-26 16:08:38.453459301 +0000 UTC m=+2093.590476536"
Jan 26 16:08:43 crc kubenswrapper[4713]: I0126 16:08:43.482305 4713 generic.go:334] "Generic (PLEG): container finished" podID="1393b971-2819-450f-a44b-978658f849e5" containerID="fc71e32bfe4ce2fa25c15fca8740c20a898f6a3ce2b4d05c43dd3a89bcfe4527" exitCode=0
Jan 26 16:08:43 crc kubenswrapper[4713]: I0126 16:08:43.482419 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" event={"ID":"1393b971-2819-450f-a44b-978658f849e5","Type":"ContainerDied","Data":"fc71e32bfe4ce2fa25c15fca8740c20a898f6a3ce2b4d05c43dd3a89bcfe4527"}
Jan 26 16:08:44 crc kubenswrapper[4713]: I0126 16:08:44.967914 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ht69g"
Jan 26 16:08:44 crc kubenswrapper[4713]: I0126 16:08:44.968148 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ht69g"
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.022255 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ht69g"
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.024908 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78"
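(Aside: the pod_startup_latency_tracker entry above is internally consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (16:08:38.453459301 - 16:08:36 = 2.453459301s), and podStartSLOduration appears to be that figure minus the image-pull interval taken from the monotonic clock readings, the "m=+..." suffixes: 2093.053818169 - 2092.575082532 = 0.478735637s, so 2.453459301 - 0.478735637 = 1.974723664s, matching the logged value. A minimal Go sketch of the same arithmetic, with the constants copied from the entry; the formula is inferred from these figures, not taken from kubelet source:

package main

import "fmt"

// Reproduces the podStartSLOduration arithmetic from the
// pod_startup_latency_tracker entry above. The first two constants are the
// m=+ monotonic readings at firstStartedPulling and lastFinishedPulling.
func main() {
	firstStartedPulling := 2092.575082532 // m=+ at firstStartedPulling
	lastFinishedPulling := 2093.053818169 // m=+ at lastFinishedPulling
	podStartE2E := 2.453459301            // watchObservedRunningTime - podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling // time spent pulling the image
	slo := podStartE2E - pull                         // startup latency excluding the pull

	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo) // prints slo=1.974723664s
}

End of aside; the journal continues below.)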
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.169852 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory\") pod \"1393b971-2819-450f-a44b-978658f849e5\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") "
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.170871 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam\") pod \"1393b971-2819-450f-a44b-978658f849e5\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") "
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.171101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlp7x\" (UniqueName: \"kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x\") pod \"1393b971-2819-450f-a44b-978658f849e5\" (UID: \"1393b971-2819-450f-a44b-978658f849e5\") "
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.186691 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x" (OuterVolumeSpecName: "kube-api-access-dlp7x") pod "1393b971-2819-450f-a44b-978658f849e5" (UID: "1393b971-2819-450f-a44b-978658f849e5"). InnerVolumeSpecName "kube-api-access-dlp7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.208808 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory" (OuterVolumeSpecName: "inventory") pod "1393b971-2819-450f-a44b-978658f849e5" (UID: "1393b971-2819-450f-a44b-978658f849e5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.210999 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1393b971-2819-450f-a44b-978658f849e5" (UID: "1393b971-2819-450f-a44b-978658f849e5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.273505 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.273537 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlp7x\" (UniqueName: \"kubernetes.io/projected/1393b971-2819-450f-a44b-978658f849e5-kube-api-access-dlp7x\") on node \"crc\" DevicePath \"\""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.273547 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1393b971-2819-450f-a44b-978658f849e5-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.521403 4713 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.521593 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-78h78" event={"ID":"1393b971-2819-450f-a44b-978658f849e5","Type":"ContainerDied","Data":"2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501"} Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.521657 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2485b695520b361eb4e112cc6f476cf058e00bedeaf3e4931dc203a4d6eb3501" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.593732 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.649077 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g"] Jan 26 16:08:45 crc kubenswrapper[4713]: E0126 16:08:45.649605 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1393b971-2819-450f-a44b-978658f849e5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.649627 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="1393b971-2819-450f-a44b-978658f849e5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.649905 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="1393b971-2819-450f-a44b-978658f849e5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.650846 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.653321 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.654010 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.655004 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.657242 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.665595 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g"] Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.674355 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.784864 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.785142 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6btg\" (UniqueName: \"kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.785265 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.886816 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6btg\" (UniqueName: \"kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.886967 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.887022 4713 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.891005 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.892953 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.904247 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6btg\" (UniqueName: \"kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rw26g\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:45 crc kubenswrapper[4713]: I0126 16:08:45.988783 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:08:46 crc kubenswrapper[4713]: I0126 16:08:46.536287 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g"] Jan 26 16:08:47 crc kubenswrapper[4713]: I0126 16:08:47.562618 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" event={"ID":"c3327a99-89b1-4901-b833-6c6c915839cb","Type":"ContainerStarted","Data":"41362a98d94bdf382d4eb5cafb6e82c42a14a579a3b9679cbaabec1a48cece4c"} Jan 26 16:08:47 crc kubenswrapper[4713]: I0126 16:08:47.562886 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ht69g" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="registry-server" containerID="cri-o://3900e1c8b0a2c607d25584fb0f5ae7f169e951fb4b0373d2275275e5df3296ae" gracePeriod=2 Jan 26 16:08:48 crc kubenswrapper[4713]: I0126 16:08:48.573242 4713 generic.go:334] "Generic (PLEG): container finished" podID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerID="3900e1c8b0a2c607d25584fb0f5ae7f169e951fb4b0373d2275275e5df3296ae" exitCode=0 Jan 26 16:08:48 crc kubenswrapper[4713]: I0126 16:08:48.573415 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerDied","Data":"3900e1c8b0a2c607d25584fb0f5ae7f169e951fb4b0373d2275275e5df3296ae"} Jan 26 16:08:48 crc kubenswrapper[4713]: I0126 16:08:48.834196 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.051687 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.160196 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpftf\" (UniqueName: \"kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf\") pod \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.160264 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content\") pod \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.160336 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities\") pod \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\" (UID: \"8b8bca2b-2bb6-40b8-af2d-218b11e04cec\") " Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.162480 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities" (OuterVolumeSpecName: "utilities") pod "8b8bca2b-2bb6-40b8-af2d-218b11e04cec" (UID: "8b8bca2b-2bb6-40b8-af2d-218b11e04cec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.165316 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf" (OuterVolumeSpecName: "kube-api-access-hpftf") pod "8b8bca2b-2bb6-40b8-af2d-218b11e04cec" (UID: "8b8bca2b-2bb6-40b8-af2d-218b11e04cec"). InnerVolumeSpecName "kube-api-access-hpftf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.262877 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.262913 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpftf\" (UniqueName: \"kubernetes.io/projected/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-kube-api-access-hpftf\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.286432 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b8bca2b-2bb6-40b8-af2d-218b11e04cec" (UID: "8b8bca2b-2bb6-40b8-af2d-218b11e04cec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.365453 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8bca2b-2bb6-40b8-af2d-218b11e04cec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.586303 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" event={"ID":"c3327a99-89b1-4901-b833-6c6c915839cb","Type":"ContainerStarted","Data":"52d9b967e5e67358e24d55b99adc180dc97a57181649b3e066869b7bf2d7dc34"} Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.589674 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht69g" event={"ID":"8b8bca2b-2bb6-40b8-af2d-218b11e04cec","Type":"ContainerDied","Data":"a6a95a928982649ec233eacf5e4a6dfa817248405189dfc00174c1fa52143a55"} Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.589707 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ht69g" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.589763 4713 scope.go:117] "RemoveContainer" containerID="3900e1c8b0a2c607d25584fb0f5ae7f169e951fb4b0373d2275275e5df3296ae" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.616009 4713 scope.go:117] "RemoveContainer" containerID="96665c32686bf2f2a095b2af6d4bc7690e088839fe3e9b29418fad393b9bef97" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.617553 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" podStartSLOduration=2.332494113 podStartE2EDuration="4.617534558s" podCreationTimestamp="2026-01-26 16:08:45 +0000 UTC" firstStartedPulling="2026-01-26 16:08:46.545604583 +0000 UTC m=+2101.682621838" lastFinishedPulling="2026-01-26 16:08:48.830645048 +0000 UTC m=+2103.967662283" observedRunningTime="2026-01-26 16:08:49.610803586 +0000 UTC m=+2104.747820831" watchObservedRunningTime="2026-01-26 16:08:49.617534558 +0000 UTC m=+2104.754551793" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.650717 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.657320 4713 scope.go:117] "RemoveContainer" containerID="c43b058287722ebe2b1d07fc73519faccf2b83764d53712d134643c5d8725718" Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.664962 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ht69g"] Jan 26 16:08:49 crc kubenswrapper[4713]: I0126 16:08:49.817771 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" path="/var/lib/kubelet/pods/8b8bca2b-2bb6-40b8-af2d-218b11e04cec/volumes" Jan 26 16:08:53 crc kubenswrapper[4713]: I0126 16:08:53.054735 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjrxt"] Jan 26 16:08:53 crc kubenswrapper[4713]: I0126 16:08:53.067352 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjrxt"] Jan 26 16:08:53 crc kubenswrapper[4713]: I0126 16:08:53.816210 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ce4cf0-e8a1-4475-a238-667b42cb429b" path="/var/lib/kubelet/pods/c1ce4cf0-e8a1-4475-a238-667b42cb429b/volumes" Jan 26 16:08:55 crc kubenswrapper[4713]: I0126 16:08:55.051317 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7xzpc"] Jan 26 16:08:55 crc kubenswrapper[4713]: I0126 16:08:55.062926 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7xzpc"] Jan 26 16:08:55 crc kubenswrapper[4713]: I0126 16:08:55.826013 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ab2adc-59f5-4803-b758-0a88857830b0" path="/var/lib/kubelet/pods/c1ab2adc-59f5-4803-b758-0a88857830b0/volumes" Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.302000 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.302630 4713 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.302701 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2"
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.303819 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.303917 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4" gracePeriod=600
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.731599 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4" exitCode=0
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.731642 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4"}
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.731895 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761"}
Jan 26 16:09:03 crc kubenswrapper[4713]: I0126 16:09:03.731920 4713 scope.go:117] "RemoveContainer" containerID="d1300a4dca87b91252b08b250d6dca8a76952889ace634dd5d8e9ac2c4663f39"
Jan 26 16:09:20 crc kubenswrapper[4713]: I0126 16:09:20.078874 4713 scope.go:117] "RemoveContainer" containerID="e0c6717ca6930d27a1658f2054f3e7c3aee59140d219f0eccb6d2fc62eff4904"
Jan 26 16:09:20 crc kubenswrapper[4713]: I0126 16:09:20.133112 4713 scope.go:117] "RemoveContainer" containerID="e7f8ca8d156e794fc997d7c9ecfe0aafb1105b85b44dfe366a08b726a95018ef"
Jan 26 16:09:20 crc kubenswrapper[4713]: I0126 16:09:20.213102 4713 scope.go:117] "RemoveContainer" containerID="bbf90d2cda31f005a60965d0273df86e11b2a78d6391634f41f11752905622b8"
Jan 26 16:09:29 crc kubenswrapper[4713]: I0126 16:09:29.017097 4713 generic.go:334] "Generic (PLEG): container finished" podID="c3327a99-89b1-4901-b833-6c6c915839cb" containerID="52d9b967e5e67358e24d55b99adc180dc97a57181649b3e066869b7bf2d7dc34" exitCode=0
Jan 26 16:09:29 crc kubenswrapper[4713]: I0126 16:09:29.017626 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" event={"ID":"c3327a99-89b1-4901-b833-6c6c915839cb","Type":"ContainerDied","Data":"52d9b967e5e67358e24d55b99adc180dc97a57181649b3e066869b7bf2d7dc34"}
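(Aside: the machine-config-daemon sequence above is the standard liveness-failure path: the probe's GET to 127.0.0.1:8798/health is refused, the kubelet kills the container with gracePeriod=600, starts a replacement, and garbage-collects older container IDs via RemoveContainer. A minimal stdlib stand-in for the probe request itself; only the URL is taken from the log, the 1s timeout is an assumed default, and the real prober also applies the probe's configured thresholds:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Issues the same health check the failing liveness probe above performs.
func main() {
	client := &http.Client{Timeout: time.Second} // assumed probe timeout, not from this log
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
		// which the kubelet surfaces as probeResult="failure"
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // a 2xx/3xx status counts as success
}

End of aside; the journal continues below.)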
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.589137 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g"
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.728911 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam\") pod \"c3327a99-89b1-4901-b833-6c6c915839cb\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") "
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.729203 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6btg\" (UniqueName: \"kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg\") pod \"c3327a99-89b1-4901-b833-6c6c915839cb\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") "
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.729246 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory\") pod \"c3327a99-89b1-4901-b833-6c6c915839cb\" (UID: \"c3327a99-89b1-4901-b833-6c6c915839cb\") "
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.735623 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg" (OuterVolumeSpecName: "kube-api-access-m6btg") pod "c3327a99-89b1-4901-b833-6c6c915839cb" (UID: "c3327a99-89b1-4901-b833-6c6c915839cb"). InnerVolumeSpecName "kube-api-access-m6btg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.755844 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3327a99-89b1-4901-b833-6c6c915839cb" (UID: "c3327a99-89b1-4901-b833-6c6c915839cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.758408 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory" (OuterVolumeSpecName: "inventory") pod "c3327a99-89b1-4901-b833-6c6c915839cb" (UID: "c3327a99-89b1-4901-b833-6c6c915839cb"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.832904 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.833327 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6btg\" (UniqueName: \"kubernetes.io/projected/c3327a99-89b1-4901-b833-6c6c915839cb-kube-api-access-m6btg\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:30 crc kubenswrapper[4713]: I0126 16:09:30.833342 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3327a99-89b1-4901-b833-6c6c915839cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.040701 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" event={"ID":"c3327a99-89b1-4901-b833-6c6c915839cb","Type":"ContainerDied","Data":"41362a98d94bdf382d4eb5cafb6e82c42a14a579a3b9679cbaabec1a48cece4c"} Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.040763 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41362a98d94bdf382d4eb5cafb6e82c42a14a579a3b9679cbaabec1a48cece4c" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.040789 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rw26g" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.193580 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc"] Jan 26 16:09:31 crc kubenswrapper[4713]: E0126 16:09:31.194147 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="extract-utilities" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194171 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="extract-utilities" Jan 26 16:09:31 crc kubenswrapper[4713]: E0126 16:09:31.194184 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3327a99-89b1-4901-b833-6c6c915839cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194195 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3327a99-89b1-4901-b833-6c6c915839cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:09:31 crc kubenswrapper[4713]: E0126 16:09:31.194218 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="extract-content" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194227 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="extract-content" Jan 26 16:09:31 crc kubenswrapper[4713]: E0126 16:09:31.194261 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="registry-server" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194269 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="registry-server" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194538 
4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8bca2b-2bb6-40b8-af2d-218b11e04cec" containerName="registry-server" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.194555 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3327a99-89b1-4901-b833-6c6c915839cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.195482 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.198532 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.198602 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.198725 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.199074 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.205674 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc"] Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.254765 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.254816 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wcv\" (UniqueName: \"kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.254879 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.356874 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.356946 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5wcv\" (UniqueName: 
\"kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.357082 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.360896 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.362572 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.376857 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5wcv\" (UniqueName: \"kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v25bc\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:31 crc kubenswrapper[4713]: I0126 16:09:31.517565 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:09:32 crc kubenswrapper[4713]: I0126 16:09:32.447684 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc"] Jan 26 16:09:32 crc kubenswrapper[4713]: W0126 16:09:32.449935 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90da253_d811_48ca_be82_642679ec25b9.slice/crio-b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe WatchSource:0}: Error finding container b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe: Status 404 returned error can't find the container with id b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe Jan 26 16:09:33 crc kubenswrapper[4713]: I0126 16:09:33.377404 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" event={"ID":"a90da253-d811-48ca-be82-642679ec25b9","Type":"ContainerStarted","Data":"c3b33b38a4312cddf9416a92497f180a45cedc57b9e65c9d356ec1dd0445a05b"} Jan 26 16:09:33 crc kubenswrapper[4713]: I0126 16:09:33.377764 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" event={"ID":"a90da253-d811-48ca-be82-642679ec25b9","Type":"ContainerStarted","Data":"b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe"} Jan 26 16:09:33 crc kubenswrapper[4713]: I0126 16:09:33.413298 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" podStartSLOduration=1.979471931 podStartE2EDuration="2.413269947s" podCreationTimestamp="2026-01-26 16:09:31 +0000 UTC" firstStartedPulling="2026-01-26 16:09:32.452261864 +0000 UTC m=+2147.589279109" lastFinishedPulling="2026-01-26 16:09:32.88605988 +0000 UTC m=+2148.023077125" observedRunningTime="2026-01-26 16:09:33.399796293 +0000 UTC m=+2148.536813548" watchObservedRunningTime="2026-01-26 16:09:33.413269947 +0000 UTC m=+2148.550287212" Jan 26 16:09:37 crc kubenswrapper[4713]: I0126 16:09:37.059974 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vzbkv"] Jan 26 16:09:37 crc kubenswrapper[4713]: I0126 16:09:37.069855 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vzbkv"] Jan 26 16:09:37 crc kubenswrapper[4713]: I0126 16:09:37.816355 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4733cb21-61c8-40e3-af0a-7375dcc21851" path="/var/lib/kubelet/pods/4733cb21-61c8-40e3-af0a-7375dcc21851/volumes" Jan 26 16:10:20 crc kubenswrapper[4713]: I0126 16:10:20.388249 4713 scope.go:117] "RemoveContainer" containerID="9073c288aba187c168cc4a661ae13d21382ca8b11fc627d980fc774c707f1233" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.216526 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.221143 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.236388 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.326519 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njr7b\" (UniqueName: \"kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.326826 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.327300 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.428912 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.429054 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njr7b\" (UniqueName: \"kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.429111 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.429720 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.429905 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.452590 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-njr7b\" (UniqueName: \"kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b\") pod \"redhat-marketplace-8jc4k\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.557742 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.973061 4713 generic.go:334] "Generic (PLEG): container finished" podID="a90da253-d811-48ca-be82-642679ec25b9" containerID="c3b33b38a4312cddf9416a92497f180a45cedc57b9e65c9d356ec1dd0445a05b" exitCode=0 Jan 26 16:10:24 crc kubenswrapper[4713]: I0126 16:10:24.973147 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" event={"ID":"a90da253-d811-48ca-be82-642679ec25b9","Type":"ContainerDied","Data":"c3b33b38a4312cddf9416a92497f180a45cedc57b9e65c9d356ec1dd0445a05b"} Jan 26 16:10:25 crc kubenswrapper[4713]: I0126 16:10:25.082759 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:25 crc kubenswrapper[4713]: I0126 16:10:25.986675 4713 generic.go:334] "Generic (PLEG): container finished" podID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerID="729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2" exitCode=0 Jan 26 16:10:25 crc kubenswrapper[4713]: I0126 16:10:25.987438 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerDied","Data":"729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2"} Jan 26 16:10:25 crc kubenswrapper[4713]: I0126 16:10:25.987515 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerStarted","Data":"6dc874dfb336a0b5da0d4dd23fbb39c8bf8cb011040aa15c89aeb4a34ec723af"} Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.530059 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.673809 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory\") pod \"a90da253-d811-48ca-be82-642679ec25b9\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.673937 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam\") pod \"a90da253-d811-48ca-be82-642679ec25b9\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.673969 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5wcv\" (UniqueName: \"kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv\") pod \"a90da253-d811-48ca-be82-642679ec25b9\" (UID: \"a90da253-d811-48ca-be82-642679ec25b9\") " Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.679143 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv" (OuterVolumeSpecName: "kube-api-access-z5wcv") pod "a90da253-d811-48ca-be82-642679ec25b9" (UID: "a90da253-d811-48ca-be82-642679ec25b9"). InnerVolumeSpecName "kube-api-access-z5wcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.707963 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory" (OuterVolumeSpecName: "inventory") pod "a90da253-d811-48ca-be82-642679ec25b9" (UID: "a90da253-d811-48ca-be82-642679ec25b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.717594 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a90da253-d811-48ca-be82-642679ec25b9" (UID: "a90da253-d811-48ca-be82-642679ec25b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.776283 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.776319 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5wcv\" (UniqueName: \"kubernetes.io/projected/a90da253-d811-48ca-be82-642679ec25b9-kube-api-access-z5wcv\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.776331 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a90da253-d811-48ca-be82-642679ec25b9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.996310 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" event={"ID":"a90da253-d811-48ca-be82-642679ec25b9","Type":"ContainerDied","Data":"b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe"} Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.997283 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a80aade9866f301cc3d15558e8067b8865253be68322be6ac2b8fd35e66bbe" Jan 26 16:10:26 crc kubenswrapper[4713]: I0126 16:10:26.997412 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v25bc" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.000839 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerStarted","Data":"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50"} Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.106042 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9tw4m"] Jan 26 16:10:27 crc kubenswrapper[4713]: E0126 16:10:27.106861 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90da253-d811-48ca-be82-642679ec25b9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.106894 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90da253-d811-48ca-be82-642679ec25b9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.107240 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90da253-d811-48ca-be82-642679ec25b9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.110316 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.113776 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.115303 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.115633 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.116162 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.124095 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9tw4m"] Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.183633 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2p6r\" (UniqueName: \"kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.183732 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.183929 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.285471 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2p6r\" (UniqueName: \"kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.285553 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.285737 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc 
kubenswrapper[4713]: I0126 16:10:27.290080 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.290140 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.303685 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2p6r\" (UniqueName: \"kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r\") pod \"ssh-known-hosts-edpm-deployment-9tw4m\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:27 crc kubenswrapper[4713]: I0126 16:10:27.429678 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:28 crc kubenswrapper[4713]: I0126 16:10:28.012732 4713 generic.go:334] "Generic (PLEG): container finished" podID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerID="485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50" exitCode=0 Jan 26 16:10:28 crc kubenswrapper[4713]: I0126 16:10:28.013048 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerDied","Data":"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50"} Jan 26 16:10:28 crc kubenswrapper[4713]: I0126 16:10:28.057066 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9tw4m"] Jan 26 16:10:28 crc kubenswrapper[4713]: W0126 16:10:28.058834 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf48dd46c_3b9a_484b_887f_e916f70a7123.slice/crio-ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec WatchSource:0}: Error finding container ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec: Status 404 returned error can't find the container with id ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec Jan 26 16:10:29 crc kubenswrapper[4713]: I0126 16:10:29.023663 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" event={"ID":"f48dd46c-3b9a-484b-887f-e916f70a7123","Type":"ContainerStarted","Data":"67cdde360ef5a8f5410de1207107427b63be992f4bcf5d280f89a211c02c532c"} Jan 26 16:10:29 crc kubenswrapper[4713]: I0126 16:10:29.023989 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" event={"ID":"f48dd46c-3b9a-484b-887f-e916f70a7123","Type":"ContainerStarted","Data":"ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec"} Jan 26 16:10:29 crc kubenswrapper[4713]: I0126 16:10:29.027429 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" 
event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerStarted","Data":"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c"} Jan 26 16:10:29 crc kubenswrapper[4713]: I0126 16:10:29.041157 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" podStartSLOduration=1.534835003 podStartE2EDuration="2.041126975s" podCreationTimestamp="2026-01-26 16:10:27 +0000 UTC" firstStartedPulling="2026-01-26 16:10:28.061606693 +0000 UTC m=+2203.198623938" lastFinishedPulling="2026-01-26 16:10:28.567898665 +0000 UTC m=+2203.704915910" observedRunningTime="2026-01-26 16:10:29.036632546 +0000 UTC m=+2204.173649781" watchObservedRunningTime="2026-01-26 16:10:29.041126975 +0000 UTC m=+2204.178144210" Jan 26 16:10:29 crc kubenswrapper[4713]: I0126 16:10:29.058291 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jc4k" podStartSLOduration=2.606262859 podStartE2EDuration="5.058276223s" podCreationTimestamp="2026-01-26 16:10:24 +0000 UTC" firstStartedPulling="2026-01-26 16:10:25.990658992 +0000 UTC m=+2201.127676237" lastFinishedPulling="2026-01-26 16:10:28.442672366 +0000 UTC m=+2203.579689601" observedRunningTime="2026-01-26 16:10:29.051437018 +0000 UTC m=+2204.188454253" watchObservedRunningTime="2026-01-26 16:10:29.058276223 +0000 UTC m=+2204.195293458" Jan 26 16:10:33 crc kubenswrapper[4713]: I0126 16:10:33.047471 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-v6rx9"] Jan 26 16:10:33 crc kubenswrapper[4713]: I0126 16:10:33.058258 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-v6rx9"] Jan 26 16:10:33 crc kubenswrapper[4713]: I0126 16:10:33.815894 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e835b410-79c4-401a-8406-77f0df484466" path="/var/lib/kubelet/pods/e835b410-79c4-401a-8406-77f0df484466/volumes" Jan 26 16:10:34 crc kubenswrapper[4713]: I0126 16:10:34.559343 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:34 crc kubenswrapper[4713]: I0126 16:10:34.559412 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:34 crc kubenswrapper[4713]: I0126 16:10:34.632914 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:35 crc kubenswrapper[4713]: I0126 16:10:35.130109 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:35 crc kubenswrapper[4713]: I0126 16:10:35.183798 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:36 crc kubenswrapper[4713]: I0126 16:10:36.095279 4713 generic.go:334] "Generic (PLEG): container finished" podID="f48dd46c-3b9a-484b-887f-e916f70a7123" containerID="67cdde360ef5a8f5410de1207107427b63be992f4bcf5d280f89a211c02c532c" exitCode=0 Jan 26 16:10:36 crc kubenswrapper[4713]: I0126 16:10:36.095436 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" event={"ID":"f48dd46c-3b9a-484b-887f-e916f70a7123","Type":"ContainerDied","Data":"67cdde360ef5a8f5410de1207107427b63be992f4bcf5d280f89a211c02c532c"} Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 
16:10:37.104578 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8jc4k" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="registry-server" containerID="cri-o://31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c" gracePeriod=2 Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.778385 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.790236 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.945263 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2p6r\" (UniqueName: \"kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r\") pod \"f48dd46c-3b9a-484b-887f-e916f70a7123\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.945781 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam\") pod \"f48dd46c-3b9a-484b-887f-e916f70a7123\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.945946 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njr7b\" (UniqueName: \"kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b\") pod \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.946101 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities\") pod \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.946481 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0\") pod \"f48dd46c-3b9a-484b-887f-e916f70a7123\" (UID: \"f48dd46c-3b9a-484b-887f-e916f70a7123\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.946799 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content\") pod \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\" (UID: \"ea1a757a-e54d-4d72-b7fa-ed670cb056af\") " Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.949635 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities" (OuterVolumeSpecName: "utilities") pod "ea1a757a-e54d-4d72-b7fa-ed670cb056af" (UID: "ea1a757a-e54d-4d72-b7fa-ed670cb056af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.951686 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r" (OuterVolumeSpecName: "kube-api-access-n2p6r") pod "f48dd46c-3b9a-484b-887f-e916f70a7123" (UID: "f48dd46c-3b9a-484b-887f-e916f70a7123"). InnerVolumeSpecName "kube-api-access-n2p6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.953613 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b" (OuterVolumeSpecName: "kube-api-access-njr7b") pod "ea1a757a-e54d-4d72-b7fa-ed670cb056af" (UID: "ea1a757a-e54d-4d72-b7fa-ed670cb056af"). InnerVolumeSpecName "kube-api-access-njr7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.979226 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea1a757a-e54d-4d72-b7fa-ed670cb056af" (UID: "ea1a757a-e54d-4d72-b7fa-ed670cb056af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.982616 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f48dd46c-3b9a-484b-887f-e916f70a7123" (UID: "f48dd46c-3b9a-484b-887f-e916f70a7123"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:37 crc kubenswrapper[4713]: I0126 16:10:37.985540 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f48dd46c-3b9a-484b-887f-e916f70a7123" (UID: "f48dd46c-3b9a-484b-887f-e916f70a7123"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.048985 4713 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.049025 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.049040 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2p6r\" (UniqueName: \"kubernetes.io/projected/f48dd46c-3b9a-484b-887f-e916f70a7123-kube-api-access-n2p6r\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.049053 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f48dd46c-3b9a-484b-887f-e916f70a7123-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.049066 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njr7b\" (UniqueName: \"kubernetes.io/projected/ea1a757a-e54d-4d72-b7fa-ed670cb056af-kube-api-access-njr7b\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.049077 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1a757a-e54d-4d72-b7fa-ed670cb056af-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.115093 4713 generic.go:334] "Generic (PLEG): container finished" podID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerID="31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c" exitCode=0 Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.115144 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerDied","Data":"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c"} Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.115184 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jc4k" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.115206 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jc4k" event={"ID":"ea1a757a-e54d-4d72-b7fa-ed670cb056af","Type":"ContainerDied","Data":"6dc874dfb336a0b5da0d4dd23fbb39c8bf8cb011040aa15c89aeb4a34ec723af"} Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.115224 4713 scope.go:117] "RemoveContainer" containerID="31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.116640 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" event={"ID":"f48dd46c-3b9a-484b-887f-e916f70a7123","Type":"ContainerDied","Data":"ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec"} Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.116680 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed7930db3703f65c0317137946b8ca42877e814295d4de8b58f8f6c817784fec" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.117712 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9tw4m" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.142839 4713 scope.go:117] "RemoveContainer" containerID="485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.151876 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.163163 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jc4k"] Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.194788 4713 scope.go:117] "RemoveContainer" containerID="729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.227570 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq"] Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.227976 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="extract-content" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.227995 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="extract-content" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.228010 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48dd46c-3b9a-484b-887f-e916f70a7123" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.228016 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48dd46c-3b9a-484b-887f-e916f70a7123" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.228039 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="extract-utilities" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.228047 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="extract-utilities" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.228066 4713 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="registry-server" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.228074 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="registry-server" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.228296 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" containerName="registry-server" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.228320 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48dd46c-3b9a-484b-887f-e916f70a7123" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.229767 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.235498 4713 scope.go:117] "RemoveContainer" containerID="31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.236955 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.237005 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.237054 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.237104 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.240546 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c\": container with ID starting with 31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c not found: ID does not exist" containerID="31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.240586 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c"} err="failed to get container status \"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c\": rpc error: code = NotFound desc = could not find container \"31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c\": container with ID starting with 31afdf986645053dd81194fe4b0451af04a4bdb853cde358b8a94a063420498c not found: ID does not exist" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.240608 4713 scope.go:117] "RemoveContainer" containerID="485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.241125 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50\": container with ID starting with 485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50 not found: ID does not exist" containerID="485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 
16:10:38.241147 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50"} err="failed to get container status \"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50\": rpc error: code = NotFound desc = could not find container \"485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50\": container with ID starting with 485a6d381a2d562741325c43ee2eac58689788f96bfc64f08c57386046d3ba50 not found: ID does not exist" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.241159 4713 scope.go:117] "RemoveContainer" containerID="729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2" Jan 26 16:10:38 crc kubenswrapper[4713]: E0126 16:10:38.241328 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2\": container with ID starting with 729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2 not found: ID does not exist" containerID="729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.241349 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2"} err="failed to get container status \"729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2\": rpc error: code = NotFound desc = could not find container \"729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2\": container with ID starting with 729ca0488acec5a481ba54f57c9c4147e933dcaac35312f916d4871690823fe2 not found: ID does not exist" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.251018 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq"] Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.252404 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.252436 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjwj\" (UniqueName: \"kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.252462 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.355862 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.356153 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjwj\" (UniqueName: \"kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.356279 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.361931 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.363316 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.373397 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjwj\" (UniqueName: \"kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94nfq\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:38 crc kubenswrapper[4713]: I0126 16:10:38.584306 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:39 crc kubenswrapper[4713]: I0126 16:10:39.044751 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-4k84d"] Jan 26 16:10:39 crc kubenswrapper[4713]: I0126 16:10:39.056496 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-4k84d"] Jan 26 16:10:39 crc kubenswrapper[4713]: I0126 16:10:39.281957 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq"] Jan 26 16:10:39 crc kubenswrapper[4713]: I0126 16:10:39.826736 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854b9c7b-7ba2-4909-8a82-3f927c3b28c0" path="/var/lib/kubelet/pods/854b9c7b-7ba2-4909-8a82-3f927c3b28c0/volumes" Jan 26 16:10:39 crc kubenswrapper[4713]: I0126 16:10:39.829578 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1a757a-e54d-4d72-b7fa-ed670cb056af" path="/var/lib/kubelet/pods/ea1a757a-e54d-4d72-b7fa-ed670cb056af/volumes" Jan 26 16:10:40 crc kubenswrapper[4713]: I0126 16:10:40.139570 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" event={"ID":"f846eed5-7039-4b1b-b45f-ca6363c482a5","Type":"ContainerStarted","Data":"cc4401f01245c6a04767c37c2d0f86a61165ead1c785aaab63cb8785c855cadf"} Jan 26 16:10:41 crc kubenswrapper[4713]: I0126 16:10:41.150397 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" event={"ID":"f846eed5-7039-4b1b-b45f-ca6363c482a5","Type":"ContainerStarted","Data":"db7ba9c5a7bc1a118d132ed9f35b8b2372dcab7b13c561fcd9e0627fe5d101eb"} Jan 26 16:10:41 crc kubenswrapper[4713]: I0126 16:10:41.165993 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" podStartSLOduration=2.560559239 podStartE2EDuration="3.165973567s" podCreationTimestamp="2026-01-26 16:10:38 +0000 UTC" firstStartedPulling="2026-01-26 16:10:39.282464199 +0000 UTC m=+2214.419481434" lastFinishedPulling="2026-01-26 16:10:39.887878527 +0000 UTC m=+2215.024895762" observedRunningTime="2026-01-26 16:10:41.165157024 +0000 UTC m=+2216.302174269" watchObservedRunningTime="2026-01-26 16:10:41.165973567 +0000 UTC m=+2216.302990812" Jan 26 16:10:49 crc kubenswrapper[4713]: I0126 16:10:49.247661 4713 generic.go:334] "Generic (PLEG): container finished" podID="f846eed5-7039-4b1b-b45f-ca6363c482a5" containerID="db7ba9c5a7bc1a118d132ed9f35b8b2372dcab7b13c561fcd9e0627fe5d101eb" exitCode=0 Jan 26 16:10:49 crc kubenswrapper[4713]: I0126 16:10:49.247843 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" event={"ID":"f846eed5-7039-4b1b-b45f-ca6363c482a5","Type":"ContainerDied","Data":"db7ba9c5a7bc1a118d132ed9f35b8b2372dcab7b13c561fcd9e0627fe5d101eb"} Jan 26 16:10:50 crc kubenswrapper[4713]: I0126 16:10:50.847710 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.035408 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjwj\" (UniqueName: \"kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj\") pod \"f846eed5-7039-4b1b-b45f-ca6363c482a5\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.035599 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory\") pod \"f846eed5-7039-4b1b-b45f-ca6363c482a5\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.035674 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam\") pod \"f846eed5-7039-4b1b-b45f-ca6363c482a5\" (UID: \"f846eed5-7039-4b1b-b45f-ca6363c482a5\") " Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.048892 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj" (OuterVolumeSpecName: "kube-api-access-bnjwj") pod "f846eed5-7039-4b1b-b45f-ca6363c482a5" (UID: "f846eed5-7039-4b1b-b45f-ca6363c482a5"). InnerVolumeSpecName "kube-api-access-bnjwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.090294 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory" (OuterVolumeSpecName: "inventory") pod "f846eed5-7039-4b1b-b45f-ca6363c482a5" (UID: "f846eed5-7039-4b1b-b45f-ca6363c482a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.094335 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f846eed5-7039-4b1b-b45f-ca6363c482a5" (UID: "f846eed5-7039-4b1b-b45f-ca6363c482a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.138608 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.138647 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f846eed5-7039-4b1b-b45f-ca6363c482a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.138660 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjwj\" (UniqueName: \"kubernetes.io/projected/f846eed5-7039-4b1b-b45f-ca6363c482a5-kube-api-access-bnjwj\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.271850 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" event={"ID":"f846eed5-7039-4b1b-b45f-ca6363c482a5","Type":"ContainerDied","Data":"cc4401f01245c6a04767c37c2d0f86a61165ead1c785aaab63cb8785c855cadf"} Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.272223 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4401f01245c6a04767c37c2d0f86a61165ead1c785aaab63cb8785c855cadf" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.271942 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94nfq" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.376816 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s"] Jan 26 16:10:51 crc kubenswrapper[4713]: E0126 16:10:51.377262 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f846eed5-7039-4b1b-b45f-ca6363c482a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.377279 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f846eed5-7039-4b1b-b45f-ca6363c482a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.377536 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f846eed5-7039-4b1b-b45f-ca6363c482a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.378482 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.389125 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s"] Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.420038 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.420289 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.420445 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.420675 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.548229 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqt79\" (UniqueName: \"kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.548592 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.548707 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.650286 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqt79\" (UniqueName: \"kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.650462 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.650511 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.654656 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.655625 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.668348 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqt79\" (UniqueName: \"kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:51 crc kubenswrapper[4713]: I0126 16:10:51.749848 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:10:52 crc kubenswrapper[4713]: I0126 16:10:52.375395 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s"] Jan 26 16:10:53 crc kubenswrapper[4713]: I0126 16:10:53.300430 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" event={"ID":"3b5b4774-7255-4b3d-ade6-994be4687006","Type":"ContainerStarted","Data":"fb6afa4d016072c3c31e3796d5c57b892c603eaeb39305cc7eb7aaaaec1f4f62"} Jan 26 16:10:53 crc kubenswrapper[4713]: I0126 16:10:53.301246 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" event={"ID":"3b5b4774-7255-4b3d-ade6-994be4687006","Type":"ContainerStarted","Data":"85deb89bcf0f079fe2679c2725a00b948ece0a08af955927fe8feccaa7721d96"} Jan 26 16:10:53 crc kubenswrapper[4713]: I0126 16:10:53.336088 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" podStartSLOduration=1.866148046 podStartE2EDuration="2.336067411s" podCreationTimestamp="2026-01-26 16:10:51 +0000 UTC" firstStartedPulling="2026-01-26 16:10:52.403396755 +0000 UTC m=+2227.540413990" lastFinishedPulling="2026-01-26 16:10:52.87331611 +0000 UTC m=+2228.010333355" observedRunningTime="2026-01-26 16:10:53.322739631 +0000 UTC m=+2228.459756906" watchObservedRunningTime="2026-01-26 16:10:53.336067411 +0000 UTC m=+2228.473084646" Jan 26 16:11:03 crc kubenswrapper[4713]: I0126 16:11:03.302055 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
26 16:11:03 crc kubenswrapper[4713]: I0126 16:11:03.302908 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:11:03 crc kubenswrapper[4713]: I0126 16:11:03.428133 4713 generic.go:334] "Generic (PLEG): container finished" podID="3b5b4774-7255-4b3d-ade6-994be4687006" containerID="fb6afa4d016072c3c31e3796d5c57b892c603eaeb39305cc7eb7aaaaec1f4f62" exitCode=0 Jan 26 16:11:03 crc kubenswrapper[4713]: I0126 16:11:03.428221 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" event={"ID":"3b5b4774-7255-4b3d-ade6-994be4687006","Type":"ContainerDied","Data":"fb6afa4d016072c3c31e3796d5c57b892c603eaeb39305cc7eb7aaaaec1f4f62"} Jan 26 16:11:04 crc kubenswrapper[4713]: I0126 16:11:04.987218 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.079728 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam\") pod \"3b5b4774-7255-4b3d-ade6-994be4687006\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.079837 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory\") pod \"3b5b4774-7255-4b3d-ade6-994be4687006\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.079943 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqt79\" (UniqueName: \"kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79\") pod \"3b5b4774-7255-4b3d-ade6-994be4687006\" (UID: \"3b5b4774-7255-4b3d-ade6-994be4687006\") " Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.107569 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79" (OuterVolumeSpecName: "kube-api-access-fqt79") pod "3b5b4774-7255-4b3d-ade6-994be4687006" (UID: "3b5b4774-7255-4b3d-ade6-994be4687006"). InnerVolumeSpecName "kube-api-access-fqt79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.145543 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory" (OuterVolumeSpecName: "inventory") pod "3b5b4774-7255-4b3d-ade6-994be4687006" (UID: "3b5b4774-7255-4b3d-ade6-994be4687006"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.165044 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b5b4774-7255-4b3d-ade6-994be4687006" (UID: "3b5b4774-7255-4b3d-ade6-994be4687006"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.182612 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.182643 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5b4774-7255-4b3d-ade6-994be4687006-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.182655 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqt79\" (UniqueName: \"kubernetes.io/projected/3b5b4774-7255-4b3d-ade6-994be4687006-kube-api-access-fqt79\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.449150 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" event={"ID":"3b5b4774-7255-4b3d-ade6-994be4687006","Type":"ContainerDied","Data":"85deb89bcf0f079fe2679c2725a00b948ece0a08af955927fe8feccaa7721d96"} Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.449188 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85deb89bcf0f079fe2679c2725a00b948ece0a08af955927fe8feccaa7721d96" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.449191 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.628936 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8"] Jan 26 16:11:05 crc kubenswrapper[4713]: E0126 16:11:05.629337 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5b4774-7255-4b3d-ade6-994be4687006" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.629353 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5b4774-7255-4b3d-ade6-994be4687006" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.629569 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5b4774-7255-4b3d-ade6-994be4687006" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.630275 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.636388 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.636755 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.636813 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.636942 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.637045 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.637167 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.637276 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.637400 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.647307 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8"] Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.795926 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796000 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796218 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796277 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796478 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796529 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796573 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796860 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796922 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796945 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgqw\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.796977 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.797096 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.797227 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.797284 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.898965 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899024 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899102 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899127 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899153 4713 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899179 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899215 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899311 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899340 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899390 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgqw\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899417 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899447 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899488 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.899520 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.906121 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.908788 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.909346 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.909733 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.912026 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.913483 4713 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.915378 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.916739 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.916969 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.920048 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.921381 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.930795 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgqw\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.931041 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.934216 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:05 crc kubenswrapper[4713]: I0126 16:11:05.955088 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:06 crc kubenswrapper[4713]: I0126 16:11:06.599138 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8"] Jan 26 16:11:07 crc kubenswrapper[4713]: I0126 16:11:07.472144 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" event={"ID":"c1e12c7f-4a67-4ef8-80c4-1c24f0269834","Type":"ContainerStarted","Data":"f2dad278ff2680127a1294bca803d194562f5b1bc6e5771768c2840a033ebe00"} Jan 26 16:11:08 crc kubenswrapper[4713]: I0126 16:11:08.482287 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" event={"ID":"c1e12c7f-4a67-4ef8-80c4-1c24f0269834","Type":"ContainerStarted","Data":"65bfc6601ef0a91e96a0be32ff2aa71b703ae759c4cc4a48df7f9f2714d205f9"} Jan 26 16:11:08 crc kubenswrapper[4713]: I0126 16:11:08.516528 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" podStartSLOduration=2.8456597820000002 podStartE2EDuration="3.516503994s" podCreationTimestamp="2026-01-26 16:11:05 +0000 UTC" firstStartedPulling="2026-01-26 16:11:06.601117576 +0000 UTC m=+2241.738134811" lastFinishedPulling="2026-01-26 16:11:07.271961748 +0000 UTC m=+2242.408979023" observedRunningTime="2026-01-26 16:11:08.505562882 +0000 UTC m=+2243.642580207" watchObservedRunningTime="2026-01-26 16:11:08.516503994 +0000 UTC m=+2243.653521239" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.689515 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.692470 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.709044 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.820323 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvfg\" (UniqueName: \"kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.820429 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.821127 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.990525 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.990695 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvfg\" (UniqueName: \"kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.990802 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.991316 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:17 crc kubenswrapper[4713]: I0126 16:11:17.991414 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:18 crc kubenswrapper[4713]: I0126 16:11:18.036275 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qvvfg\" (UniqueName: \"kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg\") pod \"community-operators-qndmb\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:18 crc kubenswrapper[4713]: I0126 16:11:18.313347 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:18 crc kubenswrapper[4713]: I0126 16:11:18.757097 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:19 crc kubenswrapper[4713]: I0126 16:11:19.597207 4713 generic.go:334] "Generic (PLEG): container finished" podID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerID="8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2" exitCode=0 Jan 26 16:11:19 crc kubenswrapper[4713]: I0126 16:11:19.597301 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerDied","Data":"8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2"} Jan 26 16:11:19 crc kubenswrapper[4713]: I0126 16:11:19.597552 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerStarted","Data":"01cfd38344d4e4b92fdf991d1862c863e0b6c5de30a0f7967295b7fac425cbf1"} Jan 26 16:11:20 crc kubenswrapper[4713]: I0126 16:11:20.497888 4713 scope.go:117] "RemoveContainer" containerID="ca8603d341a3dead0aaaa6f87ad81604357fe5b89f4eff5d20a92b2239894b1f" Jan 26 16:11:20 crc kubenswrapper[4713]: I0126 16:11:20.533197 4713 scope.go:117] "RemoveContainer" containerID="4a7f7a654d58d52e24936131017b47df97177718b63e7ba206480d8c11cfc8ed" Jan 26 16:11:20 crc kubenswrapper[4713]: I0126 16:11:20.625716 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerStarted","Data":"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828"} Jan 26 16:11:22 crc kubenswrapper[4713]: I0126 16:11:22.673402 4713 generic.go:334] "Generic (PLEG): container finished" podID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerID="cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828" exitCode=0 Jan 26 16:11:22 crc kubenswrapper[4713]: I0126 16:11:22.673890 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerDied","Data":"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828"} Jan 26 16:11:23 crc kubenswrapper[4713]: I0126 16:11:23.698306 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerStarted","Data":"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7"} Jan 26 16:11:23 crc kubenswrapper[4713]: I0126 16:11:23.730136 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qndmb" podStartSLOduration=3.216880408 podStartE2EDuration="6.730116701s" podCreationTimestamp="2026-01-26 16:11:17 +0000 UTC" firstStartedPulling="2026-01-26 16:11:19.599744997 +0000 UTC m=+2254.736762242" 
lastFinishedPulling="2026-01-26 16:11:23.11298129 +0000 UTC m=+2258.249998535" observedRunningTime="2026-01-26 16:11:23.722662549 +0000 UTC m=+2258.859679814" watchObservedRunningTime="2026-01-26 16:11:23.730116701 +0000 UTC m=+2258.867133936" Jan 26 16:11:28 crc kubenswrapper[4713]: I0126 16:11:28.313532 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:28 crc kubenswrapper[4713]: I0126 16:11:28.313959 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:28 crc kubenswrapper[4713]: I0126 16:11:28.389640 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:28 crc kubenswrapper[4713]: I0126 16:11:28.809252 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:28 crc kubenswrapper[4713]: I0126 16:11:28.864543 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:30 crc kubenswrapper[4713]: I0126 16:11:30.769135 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qndmb" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="registry-server" containerID="cri-o://2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7" gracePeriod=2 Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.296837 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.400261 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvvfg\" (UniqueName: \"kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg\") pod \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.400406 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities\") pod \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.400638 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content\") pod \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\" (UID: \"25fb0191-4714-45aa-a64e-72ae0ab50bf5\") " Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.455984 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities" (OuterVolumeSpecName: "utilities") pod "25fb0191-4714-45aa-a64e-72ae0ab50bf5" (UID: "25fb0191-4714-45aa-a64e-72ae0ab50bf5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.456187 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg" (OuterVolumeSpecName: "kube-api-access-qvvfg") pod "25fb0191-4714-45aa-a64e-72ae0ab50bf5" (UID: "25fb0191-4714-45aa-a64e-72ae0ab50bf5"). InnerVolumeSpecName "kube-api-access-qvvfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.512923 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.512965 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvvfg\" (UniqueName: \"kubernetes.io/projected/25fb0191-4714-45aa-a64e-72ae0ab50bf5-kube-api-access-qvvfg\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.551265 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25fb0191-4714-45aa-a64e-72ae0ab50bf5" (UID: "25fb0191-4714-45aa-a64e-72ae0ab50bf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.615186 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25fb0191-4714-45aa-a64e-72ae0ab50bf5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.782268 4713 generic.go:334] "Generic (PLEG): container finished" podID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerID="2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7" exitCode=0 Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.782313 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerDied","Data":"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7"} Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.782338 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qndmb" event={"ID":"25fb0191-4714-45aa-a64e-72ae0ab50bf5","Type":"ContainerDied","Data":"01cfd38344d4e4b92fdf991d1862c863e0b6c5de30a0f7967295b7fac425cbf1"} Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.782356 4713 scope.go:117] "RemoveContainer" containerID="2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.782394 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qndmb" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.825864 4713 scope.go:117] "RemoveContainer" containerID="cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.828411 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.849353 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qndmb"] Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.859998 4713 scope.go:117] "RemoveContainer" containerID="8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.919553 4713 scope.go:117] "RemoveContainer" containerID="2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7" Jan 26 16:11:31 crc kubenswrapper[4713]: E0126 16:11:31.920019 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7\": container with ID starting with 2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7 not found: ID does not exist" containerID="2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.920051 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7"} err="failed to get container status \"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7\": rpc error: code = NotFound desc = could not find container \"2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7\": container with ID starting with 2896788030d9b2ac56b6640b6b84fbc03af496e30a3d0fb478dfe15b8cfc09f7 not found: ID does not exist" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.920072 4713 scope.go:117] "RemoveContainer" containerID="cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828" Jan 26 16:11:31 crc kubenswrapper[4713]: E0126 16:11:31.920458 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828\": container with ID starting with cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828 not found: ID does not exist" containerID="cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.920505 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828"} err="failed to get container status \"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828\": rpc error: code = NotFound desc = could not find container \"cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828\": container with ID starting with cfae0fe389ec5bd0aef3eae0d218bbd32a7ea6db58a03a7adf055eab4a225828 not found: ID does not exist" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.920560 4713 scope.go:117] "RemoveContainer" containerID="8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2" Jan 26 16:11:31 crc kubenswrapper[4713]: E0126 16:11:31.920876 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2\": container with ID starting with 8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2 not found: ID does not exist" containerID="8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2" Jan 26 16:11:31 crc kubenswrapper[4713]: I0126 16:11:31.920901 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2"} err="failed to get container status \"8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2\": rpc error: code = NotFound desc = could not find container \"8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2\": container with ID starting with 8fb64de4a7fc71608910bc608a85994388526bd98694ec549a47ef4e9b836ce2 not found: ID does not exist" Jan 26 16:11:33 crc kubenswrapper[4713]: I0126 16:11:33.301852 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:11:33 crc kubenswrapper[4713]: I0126 16:11:33.302222 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:11:33 crc kubenswrapper[4713]: I0126 16:11:33.821726 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" path="/var/lib/kubelet/pods/25fb0191-4714-45aa-a64e-72ae0ab50bf5/volumes" Jan 26 16:11:48 crc kubenswrapper[4713]: I0126 16:11:48.993918 4713 generic.go:334] "Generic (PLEG): container finished" podID="c1e12c7f-4a67-4ef8-80c4-1c24f0269834" containerID="65bfc6601ef0a91e96a0be32ff2aa71b703ae759c4cc4a48df7f9f2714d205f9" exitCode=0 Jan 26 16:11:48 crc kubenswrapper[4713]: I0126 16:11:48.994003 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" event={"ID":"c1e12c7f-4a67-4ef8-80c4-1c24f0269834","Type":"ContainerDied","Data":"65bfc6601ef0a91e96a0be32ff2aa71b703ae759c4cc4a48df7f9f2714d205f9"} Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.619687 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.691841 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.691934 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.691983 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692078 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692120 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692146 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxgqw\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692196 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692245 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692280 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: 
I0126 16:11:50.692303 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692422 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692470 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692524 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.692559 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle\") pod \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\" (UID: \"c1e12c7f-4a67-4ef8-80c4-1c24f0269834\") " Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.698742 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.698738 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.699652 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.699963 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.700636 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.700737 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.700943 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.701648 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.706930 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw" (OuterVolumeSpecName: "kube-api-access-cxgqw") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "kube-api-access-cxgqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.707111 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.707256 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.723333 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.728599 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.731653 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory" (OuterVolumeSpecName: "inventory") pod "c1e12c7f-4a67-4ef8-80c4-1c24f0269834" (UID: "c1e12c7f-4a67-4ef8-80c4-1c24f0269834"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795061 4713 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795250 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795342 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxgqw\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-kube-api-access-cxgqw\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795420 4713 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795473 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795535 4713 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795598 4713 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795652 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795714 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795775 4713 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795833 4713 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795897 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.795955 4713 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:50 crc kubenswrapper[4713]: I0126 16:11:50.796014 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c1e12c7f-4a67-4ef8-80c4-1c24f0269834-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.016294 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" event={"ID":"c1e12c7f-4a67-4ef8-80c4-1c24f0269834","Type":"ContainerDied","Data":"f2dad278ff2680127a1294bca803d194562f5b1bc6e5771768c2840a033ebe00"} Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.016356 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2dad278ff2680127a1294bca803d194562f5b1bc6e5771768c2840a033ebe00" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.016539 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.162914 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s"] Jan 26 16:11:51 crc kubenswrapper[4713]: E0126 16:11:51.163487 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="extract-content" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163506 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="extract-content" Jan 26 16:11:51 crc kubenswrapper[4713]: E0126 16:11:51.163527 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="registry-server" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163535 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="registry-server" Jan 26 16:11:51 crc kubenswrapper[4713]: E0126 16:11:51.163559 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e12c7f-4a67-4ef8-80c4-1c24f0269834" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163568 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e12c7f-4a67-4ef8-80c4-1c24f0269834" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:51 crc kubenswrapper[4713]: E0126 16:11:51.163589 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="extract-utilities" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163597 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" containerName="extract-utilities" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163859 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fb0191-4714-45aa-a64e-72ae0ab50bf5" 
containerName="registry-server" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.163900 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e12c7f-4a67-4ef8-80c4-1c24f0269834" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.164888 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.167969 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.169072 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.169814 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.170701 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.171460 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.177507 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s"] Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.310098 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.310514 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.310570 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.310774 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7h9\" (UniqueName: \"kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.310888 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.412931 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7h9\" (UniqueName: \"kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.412987 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.413091 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.413144 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.413180 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.414266 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.416637 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.417303 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.418454 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.440481 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7h9\" (UniqueName: \"kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g6c6s\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:51 crc kubenswrapper[4713]: I0126 16:11:51.507912 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:11:52 crc kubenswrapper[4713]: I0126 16:11:52.108915 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s"] Jan 26 16:11:53 crc kubenswrapper[4713]: I0126 16:11:53.037405 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" event={"ID":"b67e1167-2e6c-4061-a95f-61fed731f252","Type":"ContainerStarted","Data":"3c78887db73e34932728fa25191ab7e768dc3bb7d75d52fe7ae2f625c44522a3"} Jan 26 16:11:53 crc kubenswrapper[4713]: I0126 16:11:53.037855 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" event={"ID":"b67e1167-2e6c-4061-a95f-61fed731f252","Type":"ContainerStarted","Data":"4fa51802f68fd5265128baf33e6cf7d7432e68a74b42b6355c865b3f4081e799"} Jan 26 16:11:53 crc kubenswrapper[4713]: I0126 16:11:53.066278 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" podStartSLOduration=1.6611544870000001 podStartE2EDuration="2.066259284s" podCreationTimestamp="2026-01-26 16:11:51 +0000 UTC" firstStartedPulling="2026-01-26 16:11:52.108791063 +0000 UTC m=+2287.245808298" lastFinishedPulling="2026-01-26 16:11:52.51389586 +0000 UTC m=+2287.650913095" observedRunningTime="2026-01-26 16:11:53.053777429 +0000 UTC m=+2288.190794674" watchObservedRunningTime="2026-01-26 16:11:53.066259284 +0000 UTC m=+2288.203276519" Jan 26 16:12:03 crc kubenswrapper[4713]: I0126 16:12:03.301130 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:12:03 crc kubenswrapper[4713]: I0126 16:12:03.301649 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:12:03 crc kubenswrapper[4713]: I0126 16:12:03.301699 4713 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:12:03 crc kubenswrapper[4713]: I0126 16:12:03.302654 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:12:03 crc kubenswrapper[4713]: I0126 16:12:03.302724 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" gracePeriod=600 Jan 26 16:12:03 crc kubenswrapper[4713]: E0126 16:12:03.428729 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:12:04 crc kubenswrapper[4713]: I0126 16:12:04.187347 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" exitCode=0 Jan 26 16:12:04 crc kubenswrapper[4713]: I0126 16:12:04.187436 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761"} Jan 26 16:12:04 crc kubenswrapper[4713]: I0126 16:12:04.187773 4713 scope.go:117] "RemoveContainer" containerID="d29a66018f48ff881be9f0565fb5f6910353f457cae6af63f01c0a4b486c8fb4" Jan 26 16:12:04 crc kubenswrapper[4713]: I0126 16:12:04.188808 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:12:04 crc kubenswrapper[4713]: E0126 16:12:04.189463 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:12:16 crc kubenswrapper[4713]: I0126 16:12:16.804118 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:12:16 crc kubenswrapper[4713]: E0126 16:12:16.805451 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" 
podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:12:27 crc kubenswrapper[4713]: I0126 16:12:27.803948 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:12:27 crc kubenswrapper[4713]: E0126 16:12:27.804912 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:12:38 crc kubenswrapper[4713]: I0126 16:12:38.804204 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:12:38 crc kubenswrapper[4713]: E0126 16:12:38.805576 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:12:53 crc kubenswrapper[4713]: I0126 16:12:53.803471 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:12:53 crc kubenswrapper[4713]: E0126 16:12:53.804222 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:13:01 crc kubenswrapper[4713]: I0126 16:13:01.917560 4713 generic.go:334] "Generic (PLEG): container finished" podID="b67e1167-2e6c-4061-a95f-61fed731f252" containerID="3c78887db73e34932728fa25191ab7e768dc3bb7d75d52fe7ae2f625c44522a3" exitCode=0 Jan 26 16:13:01 crc kubenswrapper[4713]: I0126 16:13:01.917647 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" event={"ID":"b67e1167-2e6c-4061-a95f-61fed731f252","Type":"ContainerDied","Data":"3c78887db73e34932728fa25191ab7e768dc3bb7d75d52fe7ae2f625c44522a3"} Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.426855 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.525625 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle\") pod \"b67e1167-2e6c-4061-a95f-61fed731f252\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.525915 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam\") pod \"b67e1167-2e6c-4061-a95f-61fed731f252\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.526129 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory\") pod \"b67e1167-2e6c-4061-a95f-61fed731f252\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.526263 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q7h9\" (UniqueName: \"kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9\") pod \"b67e1167-2e6c-4061-a95f-61fed731f252\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.526350 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0\") pod \"b67e1167-2e6c-4061-a95f-61fed731f252\" (UID: \"b67e1167-2e6c-4061-a95f-61fed731f252\") " Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.534124 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9" (OuterVolumeSpecName: "kube-api-access-9q7h9") pod "b67e1167-2e6c-4061-a95f-61fed731f252" (UID: "b67e1167-2e6c-4061-a95f-61fed731f252"). InnerVolumeSpecName "kube-api-access-9q7h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.534535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b67e1167-2e6c-4061-a95f-61fed731f252" (UID: "b67e1167-2e6c-4061-a95f-61fed731f252"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.557466 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b67e1167-2e6c-4061-a95f-61fed731f252" (UID: "b67e1167-2e6c-4061-a95f-61fed731f252"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.567830 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory" (OuterVolumeSpecName: "inventory") pod "b67e1167-2e6c-4061-a95f-61fed731f252" (UID: "b67e1167-2e6c-4061-a95f-61fed731f252"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.569938 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b67e1167-2e6c-4061-a95f-61fed731f252" (UID: "b67e1167-2e6c-4061-a95f-61fed731f252"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.628481 4713 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.628525 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.628540 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b67e1167-2e6c-4061-a95f-61fed731f252-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.628551 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q7h9\" (UniqueName: \"kubernetes.io/projected/b67e1167-2e6c-4061-a95f-61fed731f252-kube-api-access-9q7h9\") on node \"crc\" DevicePath \"\"" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.628562 4713 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b67e1167-2e6c-4061-a95f-61fed731f252-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.938074 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" event={"ID":"b67e1167-2e6c-4061-a95f-61fed731f252","Type":"ContainerDied","Data":"4fa51802f68fd5265128baf33e6cf7d7432e68a74b42b6355c865b3f4081e799"} Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.938118 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa51802f68fd5265128baf33e6cf7d7432e68a74b42b6355c865b3f4081e799" Jan 26 16:13:03 crc kubenswrapper[4713]: I0126 16:13:03.938130 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g6c6s" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.072404 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl"] Jan 26 16:13:04 crc kubenswrapper[4713]: E0126 16:13:04.072868 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67e1167-2e6c-4061-a95f-61fed731f252" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.072881 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67e1167-2e6c-4061-a95f-61fed731f252" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.073112 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67e1167-2e6c-4061-a95f-61fed731f252" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.074393 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.080435 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.080708 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.080829 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.081147 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.082119 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.083311 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.085044 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl"] Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145318 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145430 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145471 4713 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145493 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbqhv\" (UniqueName: \"kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145676 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.145869 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248246 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248498 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248599 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248696 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248740 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbqhv\" (UniqueName: \"kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.248812 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.253855 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.256057 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.256160 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.256894 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.266435 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc 
kubenswrapper[4713]: I0126 16:13:04.285408 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbqhv\" (UniqueName: \"kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.401875 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.958701 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl"] Jan 26 16:13:04 crc kubenswrapper[4713]: I0126 16:13:04.972505 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:13:05 crc kubenswrapper[4713]: I0126 16:13:05.957693 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" event={"ID":"409601d1-035c-435e-a892-4cb0a2f6760e","Type":"ContainerStarted","Data":"d34a356fe11ca2dce970bb00482ee6280b62ebb602db7e04982bfc2a198eccf7"} Jan 26 16:13:05 crc kubenswrapper[4713]: I0126 16:13:05.957979 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" event={"ID":"409601d1-035c-435e-a892-4cb0a2f6760e","Type":"ContainerStarted","Data":"efde09a3bfe3ad16695186a0c2824f49c2276f5b54e2271b982077007997dee8"} Jan 26 16:13:06 crc kubenswrapper[4713]: I0126 16:13:06.804421 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:13:06 crc kubenswrapper[4713]: E0126 16:13:06.805752 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:13:20 crc kubenswrapper[4713]: I0126 16:13:20.803979 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:13:20 crc kubenswrapper[4713]: E0126 16:13:20.804792 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:13:31 crc kubenswrapper[4713]: I0126 16:13:31.804277 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:13:31 crc kubenswrapper[4713]: E0126 16:13:31.806966 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:13:45 crc kubenswrapper[4713]: I0126 16:13:45.827491 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:13:45 crc kubenswrapper[4713]: E0126 16:13:45.828342 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:13:58 crc kubenswrapper[4713]: I0126 16:13:58.557496 4713 generic.go:334] "Generic (PLEG): container finished" podID="409601d1-035c-435e-a892-4cb0a2f6760e" containerID="d34a356fe11ca2dce970bb00482ee6280b62ebb602db7e04982bfc2a198eccf7" exitCode=0 Jan 26 16:13:58 crc kubenswrapper[4713]: I0126 16:13:58.557567 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" event={"ID":"409601d1-035c-435e-a892-4cb0a2f6760e","Type":"ContainerDied","Data":"d34a356fe11ca2dce970bb00482ee6280b62ebb602db7e04982bfc2a198eccf7"} Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.164543 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260433 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260642 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbqhv\" (UniqueName: \"kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260687 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260722 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260781 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: 
\"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.260868 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle\") pod \"409601d1-035c-435e-a892-4cb0a2f6760e\" (UID: \"409601d1-035c-435e-a892-4cb0a2f6760e\") " Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.267751 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.273172 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv" (OuterVolumeSpecName: "kube-api-access-wbqhv") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "kube-api-access-wbqhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.363566 4713 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.364140 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbqhv\" (UniqueName: \"kubernetes.io/projected/409601d1-035c-435e-a892-4cb0a2f6760e-kube-api-access-wbqhv\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.365486 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory" (OuterVolumeSpecName: "inventory") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.376559 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.386790 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.405615 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "409601d1-035c-435e-a892-4cb0a2f6760e" (UID: "409601d1-035c-435e-a892-4cb0a2f6760e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.467044 4713 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.467105 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.467123 4713 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.467138 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409601d1-035c-435e-a892-4cb0a2f6760e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.580637 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" event={"ID":"409601d1-035c-435e-a892-4cb0a2f6760e","Type":"ContainerDied","Data":"efde09a3bfe3ad16695186a0c2824f49c2276f5b54e2271b982077007997dee8"} Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.580682 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efde09a3bfe3ad16695186a0c2824f49c2276f5b54e2271b982077007997dee8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.580696 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.749887 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8"] Jan 26 16:14:00 crc kubenswrapper[4713]: E0126 16:14:00.750437 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409601d1-035c-435e-a892-4cb0a2f6760e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.750458 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="409601d1-035c-435e-a892-4cb0a2f6760e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.750678 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="409601d1-035c-435e-a892-4cb0a2f6760e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.751487 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.753834 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.754161 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.755151 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.755736 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.756102 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.757942 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8"] Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.804269 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:14:00 crc kubenswrapper[4713]: E0126 16:14:00.804580 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.875437 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkjk\" (UniqueName: \"kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.875686 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.875764 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.875877 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: 
\"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.875928 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.978203 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.978316 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.978383 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.979188 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgkjk\" (UniqueName: \"kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.979329 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.985322 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.985588 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: 
\"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.985673 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:00 crc kubenswrapper[4713]: I0126 16:14:00.985806 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:01 crc kubenswrapper[4713]: I0126 16:14:01.002542 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgkjk\" (UniqueName: \"kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:01 crc kubenswrapper[4713]: I0126 16:14:01.071216 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:14:01 crc kubenswrapper[4713]: I0126 16:14:01.602980 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8"] Jan 26 16:14:02 crc kubenswrapper[4713]: I0126 16:14:02.597013 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" event={"ID":"ab00c6e0-12fb-4e99-be6b-ca341fbfb235","Type":"ContainerStarted","Data":"e5fe86195781f1f9bc37b864cde835f3be1b0cb1db4765a054355ed56ad88635"} Jan 26 16:14:03 crc kubenswrapper[4713]: I0126 16:14:03.607723 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" event={"ID":"ab00c6e0-12fb-4e99-be6b-ca341fbfb235","Type":"ContainerStarted","Data":"d5789e90660481152388d5d62de520db2ae8572dc643a5527ae5557e05cda975"} Jan 26 16:14:03 crc kubenswrapper[4713]: I0126 16:14:03.630295 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" podStartSLOduration=2.004993565 podStartE2EDuration="3.630276652s" podCreationTimestamp="2026-01-26 16:14:00 +0000 UTC" firstStartedPulling="2026-01-26 16:14:01.613049661 +0000 UTC m=+2416.750066906" lastFinishedPulling="2026-01-26 16:14:03.238332748 +0000 UTC m=+2418.375349993" observedRunningTime="2026-01-26 16:14:03.621027859 +0000 UTC m=+2418.758045094" watchObservedRunningTime="2026-01-26 16:14:03.630276652 +0000 UTC m=+2418.767293887" Jan 26 16:14:14 crc kubenswrapper[4713]: I0126 16:14:14.804066 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:14:14 crc kubenswrapper[4713]: E0126 16:14:14.804972 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:14:27 crc kubenswrapper[4713]: I0126 16:14:27.805295 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:14:27 crc kubenswrapper[4713]: E0126 16:14:27.806431 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:14:38 crc kubenswrapper[4713]: I0126 16:14:38.805630 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:14:38 crc kubenswrapper[4713]: E0126 16:14:38.807257 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:14:49 crc kubenswrapper[4713]: I0126 16:14:49.805104 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:14:49 crc kubenswrapper[4713]: E0126 16:14:49.807451 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.160406 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j"] Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.163276 4713 util.go:30] "No sandbox for pod can be found. 
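The 29490735 in collect-profiles-29490735-c488j is the job's scheduled run time expressed in minutes since the Unix epoch, following the CronJob controller's job-naming convention. Decoding it lands exactly on the 16:15:00 timestamps that follow:

```go
// Decode the CronJob job-name suffix: minutes since the Unix epoch.
package main

import (
	"fmt"
	"time"
)

func main() {
	const minutes = 29490735
	t := time.Unix(minutes*60, 0).UTC()
	fmt.Println(t) // 2026-01-26 16:15:00 +0000 UTC
}
```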
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.166952 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.170071 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j"] Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.170941 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.344974 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.345198 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.345236 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.447082 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.447213 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.447237 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.448158 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume\") pod 
\"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.455190 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.476600 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7\") pod \"collect-profiles-29490735-c488j\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.484500 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:00 crc kubenswrapper[4713]: I0126 16:15:00.945165 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j"] Jan 26 16:15:00 crc kubenswrapper[4713]: W0126 16:15:00.951739 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17288c2c_1c24_4017_8443_5dc2615b7e72.slice/crio-7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891 WatchSource:0}: Error finding container 7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891: Status 404 returned error can't find the container with id 7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891 Jan 26 16:15:01 crc kubenswrapper[4713]: I0126 16:15:01.589028 4713 generic.go:334] "Generic (PLEG): container finished" podID="17288c2c-1c24-4017-8443-5dc2615b7e72" containerID="d01d423c2e990c0e4b080e32a4d09810d18afd71bb361162618aaaa4d940e10a" exitCode=0 Jan 26 16:15:01 crc kubenswrapper[4713]: I0126 16:15:01.589066 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" event={"ID":"17288c2c-1c24-4017-8443-5dc2615b7e72","Type":"ContainerDied","Data":"d01d423c2e990c0e4b080e32a4d09810d18afd71bb361162618aaaa4d940e10a"} Jan 26 16:15:01 crc kubenswrapper[4713]: I0126 16:15:01.589326 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" event={"ID":"17288c2c-1c24-4017-8443-5dc2615b7e72","Type":"ContainerStarted","Data":"7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891"} Jan 26 16:15:01 crc kubenswrapper[4713]: I0126 16:15:01.803891 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:15:01 crc kubenswrapper[4713]: E0126 16:15:01.804263 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" 
podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.030863 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.109517 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume\") pod \"17288c2c-1c24-4017-8443-5dc2615b7e72\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.109692 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume\") pod \"17288c2c-1c24-4017-8443-5dc2615b7e72\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.109723 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7\") pod \"17288c2c-1c24-4017-8443-5dc2615b7e72\" (UID: \"17288c2c-1c24-4017-8443-5dc2615b7e72\") " Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.110411 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume" (OuterVolumeSpecName: "config-volume") pod "17288c2c-1c24-4017-8443-5dc2615b7e72" (UID: "17288c2c-1c24-4017-8443-5dc2615b7e72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.115137 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7" (OuterVolumeSpecName: "kube-api-access-hbdh7") pod "17288c2c-1c24-4017-8443-5dc2615b7e72" (UID: "17288c2c-1c24-4017-8443-5dc2615b7e72"). InnerVolumeSpecName "kube-api-access-hbdh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.115633 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17288c2c-1c24-4017-8443-5dc2615b7e72" (UID: "17288c2c-1c24-4017-8443-5dc2615b7e72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.212454 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17288c2c-1c24-4017-8443-5dc2615b7e72-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.212484 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17288c2c-1c24-4017-8443-5dc2615b7e72-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.212495 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/17288c2c-1c24-4017-8443-5dc2615b7e72-kube-api-access-hbdh7\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.608795 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" event={"ID":"17288c2c-1c24-4017-8443-5dc2615b7e72","Type":"ContainerDied","Data":"7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891"} Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.609276 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf490d990c3b212adfd48ba007888d07381624b48b51fa69359b7a6e8a81891" Jan 26 16:15:03 crc kubenswrapper[4713]: I0126 16:15:03.608854 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-c488j" Jan 26 16:15:04 crc kubenswrapper[4713]: I0126 16:15:04.103322 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"] Jan 26 16:15:04 crc kubenswrapper[4713]: I0126 16:15:04.114294 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-wnmzb"] Jan 26 16:15:05 crc kubenswrapper[4713]: I0126 16:15:05.814998 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afe1ab0-3817-4d66-aaf9-e99181ae0a55" path="/var/lib/kubelet/pods/0afe1ab0-3817-4d66-aaf9-e99181ae0a55/volumes" Jan 26 16:15:16 crc kubenswrapper[4713]: I0126 16:15:16.804062 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:15:16 crc kubenswrapper[4713]: E0126 16:15:16.805404 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.296465 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:20 crc kubenswrapper[4713]: E0126 16:15:20.297669 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17288c2c-1c24-4017-8443-5dc2615b7e72" containerName="collect-profiles" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.297687 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="17288c2c-1c24-4017-8443-5dc2615b7e72" containerName="collect-profiles" Jan 26 16:15:20 crc 
kubenswrapper[4713]: I0126 16:15:20.297935 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="17288c2c-1c24-4017-8443-5dc2615b7e72" containerName="collect-profiles" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.300198 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.312942 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.445597 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwfh\" (UniqueName: \"kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.445661 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.445738 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.548139 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmwfh\" (UniqueName: \"kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.548214 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.548309 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.548926 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.550138 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.569663 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmwfh\" (UniqueName: \"kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh\") pod \"certified-operators-t54kl\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.659072 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:20 crc kubenswrapper[4713]: I0126 16:15:20.762892 4713 scope.go:117] "RemoveContainer" containerID="174210d4dea3f0d359ad2fe2b7bd2ebb30c4dcf484dee93dfcbd5d19b469de0f" Jan 26 16:15:21 crc kubenswrapper[4713]: I0126 16:15:21.235297 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:21 crc kubenswrapper[4713]: I0126 16:15:21.801791 4713 generic.go:334] "Generic (PLEG): container finished" podID="37fca54b-176c-4616-98df-6156cb4a066b" containerID="b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141" exitCode=0 Jan 26 16:15:21 crc kubenswrapper[4713]: I0126 16:15:21.801889 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerDied","Data":"b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141"} Jan 26 16:15:21 crc kubenswrapper[4713]: I0126 16:15:21.801970 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerStarted","Data":"a3b4111ea1909c6d81c51161acb64ac6c9497e063ead6ce317c9b23230aa49cd"} Jan 26 16:15:23 crc kubenswrapper[4713]: I0126 16:15:23.823421 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerStarted","Data":"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f"} Jan 26 16:15:24 crc kubenswrapper[4713]: I0126 16:15:24.840674 4713 generic.go:334] "Generic (PLEG): container finished" podID="37fca54b-176c-4616-98df-6156cb4a066b" containerID="fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f" exitCode=0 Jan 26 16:15:24 crc kubenswrapper[4713]: I0126 16:15:24.840766 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerDied","Data":"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f"} Jan 26 16:15:25 crc kubenswrapper[4713]: I0126 16:15:25.853055 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerStarted","Data":"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e"} Jan 26 16:15:25 crc kubenswrapper[4713]: I0126 16:15:25.885793 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t54kl" podStartSLOduration=2.394292124 podStartE2EDuration="5.885771258s" 
podCreationTimestamp="2026-01-26 16:15:20 +0000 UTC" firstStartedPulling="2026-01-26 16:15:21.805553703 +0000 UTC m=+2496.942570978" lastFinishedPulling="2026-01-26 16:15:25.297032857 +0000 UTC m=+2500.434050112" observedRunningTime="2026-01-26 16:15:25.87882628 +0000 UTC m=+2501.015843545" watchObservedRunningTime="2026-01-26 16:15:25.885771258 +0000 UTC m=+2501.022788503" Jan 26 16:15:29 crc kubenswrapper[4713]: I0126 16:15:29.805742 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:15:29 crc kubenswrapper[4713]: E0126 16:15:29.806817 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:15:30 crc kubenswrapper[4713]: I0126 16:15:30.659351 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:30 crc kubenswrapper[4713]: I0126 16:15:30.659592 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:30 crc kubenswrapper[4713]: I0126 16:15:30.717871 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:31 crc kubenswrapper[4713]: I0126 16:15:31.001919 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:31 crc kubenswrapper[4713]: I0126 16:15:31.077947 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:32 crc kubenswrapper[4713]: I0126 16:15:32.939578 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t54kl" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="registry-server" containerID="cri-o://9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e" gracePeriod=2 Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.520556 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.646301 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content\") pod \"37fca54b-176c-4616-98df-6156cb4a066b\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.646454 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities\") pod \"37fca54b-176c-4616-98df-6156cb4a066b\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.646605 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmwfh\" (UniqueName: \"kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh\") pod \"37fca54b-176c-4616-98df-6156cb4a066b\" (UID: \"37fca54b-176c-4616-98df-6156cb4a066b\") " Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.647737 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities" (OuterVolumeSpecName: "utilities") pod "37fca54b-176c-4616-98df-6156cb4a066b" (UID: "37fca54b-176c-4616-98df-6156cb4a066b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.658838 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh" (OuterVolumeSpecName: "kube-api-access-vmwfh") pod "37fca54b-176c-4616-98df-6156cb4a066b" (UID: "37fca54b-176c-4616-98df-6156cb4a066b"). InnerVolumeSpecName "kube-api-access-vmwfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.692057 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37fca54b-176c-4616-98df-6156cb4a066b" (UID: "37fca54b-176c-4616-98df-6156cb4a066b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.749539 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.749870 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fca54b-176c-4616-98df-6156cb4a066b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.750021 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmwfh\" (UniqueName: \"kubernetes.io/projected/37fca54b-176c-4616-98df-6156cb4a066b-kube-api-access-vmwfh\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.953225 4713 generic.go:334] "Generic (PLEG): container finished" podID="37fca54b-176c-4616-98df-6156cb4a066b" containerID="9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e" exitCode=0 Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.953271 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerDied","Data":"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e"} Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.953324 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t54kl" event={"ID":"37fca54b-176c-4616-98df-6156cb4a066b","Type":"ContainerDied","Data":"a3b4111ea1909c6d81c51161acb64ac6c9497e063ead6ce317c9b23230aa49cd"} Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.953343 4713 scope.go:117] "RemoveContainer" containerID="9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.953349 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t54kl" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.985090 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.989990 4713 scope.go:117] "RemoveContainer" containerID="fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f" Jan 26 16:15:33 crc kubenswrapper[4713]: I0126 16:15:33.998688 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t54kl"] Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.021007 4713 scope.go:117] "RemoveContainer" containerID="b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.089283 4713 scope.go:117] "RemoveContainer" containerID="9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e" Jan 26 16:15:34 crc kubenswrapper[4713]: E0126 16:15:34.090131 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e\": container with ID starting with 9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e not found: ID does not exist" containerID="9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.090195 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e"} err="failed to get container status \"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e\": rpc error: code = NotFound desc = could not find container \"9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e\": container with ID starting with 9ea273633ec980a6b05f36363b4fe7e285a12255b1ea7786c7dd258d046afc7e not found: ID does not exist" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.090231 4713 scope.go:117] "RemoveContainer" containerID="fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f" Jan 26 16:15:34 crc kubenswrapper[4713]: E0126 16:15:34.090999 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f\": container with ID starting with fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f not found: ID does not exist" containerID="fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.091062 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f"} err="failed to get container status \"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f\": rpc error: code = NotFound desc = could not find container \"fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f\": container with ID starting with fa80234cfccf5a5ad4f883c73a0c8913367afdcc94b8569a7c7887e7ce09fb8f not found: ID does not exist" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.091099 4713 scope.go:117] "RemoveContainer" containerID="b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141" Jan 26 16:15:34 crc kubenswrapper[4713]: E0126 16:15:34.091513 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141\": container with ID starting with b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141 not found: ID does not exist" containerID="b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141" Jan 26 16:15:34 crc kubenswrapper[4713]: I0126 16:15:34.091547 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141"} err="failed to get container status \"b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141\": rpc error: code = NotFound desc = could not find container \"b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141\": container with ID starting with b70cfb1d342081bb9f9dbc0795fcdbb0230f6542890540b5ea55c0df9919b141 not found: ID does not exist" Jan 26 16:15:35 crc kubenswrapper[4713]: I0126 16:15:35.813881 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fca54b-176c-4616-98df-6156cb4a066b" path="/var/lib/kubelet/pods/37fca54b-176c-4616-98df-6156cb4a066b/volumes" Jan 26 16:15:43 crc kubenswrapper[4713]: I0126 16:15:43.803773 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:15:43 crc kubenswrapper[4713]: E0126 16:15:43.804530 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:15:55 crc kubenswrapper[4713]: I0126 16:15:55.808968 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:15:55 crc kubenswrapper[4713]: E0126 16:15:55.809599 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:16:06 crc kubenswrapper[4713]: I0126 16:16:06.803940 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:16:06 crc kubenswrapper[4713]: E0126 16:16:06.805098 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:16:20 crc kubenswrapper[4713]: I0126 16:16:20.804663 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:16:20 crc kubenswrapper[4713]: E0126 16:16:20.805931 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:16:33 crc kubenswrapper[4713]: I0126 16:16:33.810126 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:16:33 crc kubenswrapper[4713]: E0126 16:16:33.811208 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:16:48 crc kubenswrapper[4713]: I0126 16:16:48.803897 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:16:48 crc kubenswrapper[4713]: E0126 16:16:48.804727 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:17:01 crc kubenswrapper[4713]: I0126 16:17:01.804749 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:17:01 crc kubenswrapper[4713]: E0126 16:17:01.805640 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:17:12 crc kubenswrapper[4713]: I0126 16:17:12.803326 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:17:14 crc kubenswrapper[4713]: I0126 16:17:14.089185 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f"} Jan 26 16:18:13 crc kubenswrapper[4713]: E0126 16:18:13.939210 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab00c6e0_12fb_4e99_be6b_ca341fbfb235.slice/crio-d5789e90660481152388d5d62de520db2ae8572dc643a5527ae5557e05cda975.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:18:14 crc kubenswrapper[4713]: I0126 16:18:14.766026 4713 generic.go:334] "Generic (PLEG): container finished" podID="ab00c6e0-12fb-4e99-be6b-ca341fbfb235" containerID="d5789e90660481152388d5d62de520db2ae8572dc643a5527ae5557e05cda975" exitCode=0 Jan 26 
16:18:14 crc kubenswrapper[4713]: I0126 16:18:14.766114 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" event={"ID":"ab00c6e0-12fb-4e99-be6b-ca341fbfb235","Type":"ContainerDied","Data":"d5789e90660481152388d5d62de520db2ae8572dc643a5527ae5557e05cda975"} Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.339282 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.518845 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory\") pod \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.519028 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle\") pod \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.519088 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam\") pod \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.519270 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgkjk\" (UniqueName: \"kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk\") pod \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.519355 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0\") pod \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\" (UID: \"ab00c6e0-12fb-4e99-be6b-ca341fbfb235\") " Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.525228 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk" (OuterVolumeSpecName: "kube-api-access-bgkjk") pod "ab00c6e0-12fb-4e99-be6b-ca341fbfb235" (UID: "ab00c6e0-12fb-4e99-be6b-ca341fbfb235"). InnerVolumeSpecName "kube-api-access-bgkjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.532855 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ab00c6e0-12fb-4e99-be6b-ca341fbfb235" (UID: "ab00c6e0-12fb-4e99-be6b-ca341fbfb235"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.555453 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ab00c6e0-12fb-4e99-be6b-ca341fbfb235" (UID: "ab00c6e0-12fb-4e99-be6b-ca341fbfb235"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.558908 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab00c6e0-12fb-4e99-be6b-ca341fbfb235" (UID: "ab00c6e0-12fb-4e99-be6b-ca341fbfb235"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.564119 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory" (OuterVolumeSpecName: "inventory") pod "ab00c6e0-12fb-4e99-be6b-ca341fbfb235" (UID: "ab00c6e0-12fb-4e99-be6b-ca341fbfb235"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.623607 4713 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.623914 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.623981 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgkjk\" (UniqueName: \"kubernetes.io/projected/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-kube-api-access-bgkjk\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.624037 4713 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.624090 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab00c6e0-12fb-4e99-be6b-ca341fbfb235-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.793431 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" event={"ID":"ab00c6e0-12fb-4e99-be6b-ca341fbfb235","Type":"ContainerDied","Data":"e5fe86195781f1f9bc37b864cde835f3be1b0cb1db4765a054355ed56ad88635"} Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.793529 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5fe86195781f1f9bc37b864cde835f3be1b0cb1db4765a054355ed56ad88635" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.793642 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.899981 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg"] Jan 26 16:18:16 crc kubenswrapper[4713]: E0126 16:18:16.900842 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="registry-server" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.900882 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="registry-server" Jan 26 16:18:16 crc kubenswrapper[4713]: E0126 16:18:16.900904 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="extract-utilities" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.900914 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="extract-utilities" Jan 26 16:18:16 crc kubenswrapper[4713]: E0126 16:18:16.900925 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="extract-content" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.900935 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="extract-content" Jan 26 16:18:16 crc kubenswrapper[4713]: E0126 16:18:16.900969 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab00c6e0-12fb-4e99-be6b-ca341fbfb235" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.900980 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab00c6e0-12fb-4e99-be6b-ca341fbfb235" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.901230 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab00c6e0-12fb-4e99-be6b-ca341fbfb235" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.901248 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fca54b-176c-4616-98df-6156cb4a066b" containerName="registry-server" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.902604 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.907723 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.907810 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.907810 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.908107 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.908726 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.908779 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.908860 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:18:16 crc kubenswrapper[4713]: I0126 16:18:16.912744 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg"] Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032023 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032083 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032117 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032293 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032708 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032865 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.032984 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.033052 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2d4j\" (UniqueName: \"kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.033320 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.135629 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.135710 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.135774 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.135823 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.136005 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.136059 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.136104 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.136148 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2d4j\" (UniqueName: \"kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.136768 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.138191 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.140276 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.140495 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.143069 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.143272 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.144304 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.144838 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.145100 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.156765 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2d4j\" (UniqueName: \"kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x44rg\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.222221 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.814326 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg"] Jan 26 16:18:17 crc kubenswrapper[4713]: I0126 16:18:17.815876 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:18:18 crc kubenswrapper[4713]: I0126 16:18:18.812377 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" event={"ID":"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7","Type":"ContainerStarted","Data":"bef3194e153eb51d1133f75dab47722e9d70a9825e88308b1587013142c4e8dc"} Jan 26 16:18:19 crc kubenswrapper[4713]: I0126 16:18:19.822769 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" event={"ID":"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7","Type":"ContainerStarted","Data":"dace8efdae7d13c793d2e101dc41ab6235d21bf8a42ae1b817d2abc903b6ab43"} Jan 26 16:18:19 crc kubenswrapper[4713]: I0126 16:18:19.842375 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" podStartSLOduration=3.269290508 podStartE2EDuration="3.842338384s" podCreationTimestamp="2026-01-26 16:18:16 +0000 UTC" firstStartedPulling="2026-01-26 16:18:17.815659483 +0000 UTC m=+2672.952676718" lastFinishedPulling="2026-01-26 16:18:18.388707359 +0000 UTC m=+2673.525724594" observedRunningTime="2026-01-26 16:18:19.837487389 +0000 UTC m=+2674.974504644" watchObservedRunningTime="2026-01-26 16:18:19.842338384 +0000 UTC m=+2674.979355629" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.310831 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.313582 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.325703 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.422723 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72n7\" (UniqueName: \"kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.422799 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.423098 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.525558 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.525752 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72n7\" (UniqueName: \"kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.525814 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.526043 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.526439 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.546728 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r72n7\" (UniqueName: \"kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7\") pod \"redhat-operators-47zf9\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:30 crc kubenswrapper[4713]: I0126 16:19:30.679209 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:31 crc kubenswrapper[4713]: I0126 16:19:31.222824 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:31 crc kubenswrapper[4713]: I0126 16:19:31.629634 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerStarted","Data":"18eb69549a336636c40f4ea5b9d01ed324d83d131b425222c29349936db3e549"} Jan 26 16:19:32 crc kubenswrapper[4713]: I0126 16:19:32.640170 4713 generic.go:334] "Generic (PLEG): container finished" podID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerID="acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848" exitCode=0 Jan 26 16:19:32 crc kubenswrapper[4713]: I0126 16:19:32.640279 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerDied","Data":"acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848"} Jan 26 16:19:33 crc kubenswrapper[4713]: I0126 16:19:33.301176 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:19:33 crc kubenswrapper[4713]: I0126 16:19:33.301250 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:19:33 crc kubenswrapper[4713]: I0126 16:19:33.658943 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerStarted","Data":"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58"} Jan 26 16:19:37 crc kubenswrapper[4713]: I0126 16:19:37.711393 4713 generic.go:334] "Generic (PLEG): container finished" podID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerID="effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58" exitCode=0 Jan 26 16:19:37 crc kubenswrapper[4713]: I0126 16:19:37.711453 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerDied","Data":"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58"} Jan 26 16:19:38 crc kubenswrapper[4713]: I0126 16:19:38.723820 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerStarted","Data":"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d"} Jan 26 16:19:38 crc kubenswrapper[4713]: I0126 16:19:38.746720 4713 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47zf9" podStartSLOduration=3.288553564 podStartE2EDuration="8.746704349s" podCreationTimestamp="2026-01-26 16:19:30 +0000 UTC" firstStartedPulling="2026-01-26 16:19:32.641951977 +0000 UTC m=+2747.778969202" lastFinishedPulling="2026-01-26 16:19:38.100102752 +0000 UTC m=+2753.237119987" observedRunningTime="2026-01-26 16:19:38.743031407 +0000 UTC m=+2753.880048642" watchObservedRunningTime="2026-01-26 16:19:38.746704349 +0000 UTC m=+2753.883721584" Jan 26 16:19:40 crc kubenswrapper[4713]: I0126 16:19:40.679625 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:40 crc kubenswrapper[4713]: I0126 16:19:40.680192 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:41 crc kubenswrapper[4713]: I0126 16:19:41.721164 4713 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47zf9" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="registry-server" probeResult="failure" output=< Jan 26 16:19:41 crc kubenswrapper[4713]: timeout: failed to connect service ":50051" within 1s Jan 26 16:19:41 crc kubenswrapper[4713]: > Jan 26 16:19:50 crc kubenswrapper[4713]: I0126 16:19:50.724158 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:50 crc kubenswrapper[4713]: I0126 16:19:50.771678 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:50 crc kubenswrapper[4713]: I0126 16:19:50.959211 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:51 crc kubenswrapper[4713]: I0126 16:19:51.866293 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-47zf9" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="registry-server" containerID="cri-o://0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d" gracePeriod=2 Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.428354 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.543385 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities\") pod \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.543749 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r72n7\" (UniqueName: \"kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7\") pod \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.543791 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content\") pod \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\" (UID: \"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4\") " Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.544535 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities" (OuterVolumeSpecName: "utilities") pod "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" (UID: "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.553495 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7" (OuterVolumeSpecName: "kube-api-access-r72n7") pod "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" (UID: "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4"). InnerVolumeSpecName "kube-api-access-r72n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.646709 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.646771 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r72n7\" (UniqueName: \"kubernetes.io/projected/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-kube-api-access-r72n7\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.691927 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" (UID: "76f1edd1-7b15-4268-a9d2-ab533fdaa9a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.748665 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.876444 4713 generic.go:334] "Generic (PLEG): container finished" podID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerID="0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d" exitCode=0 Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.876496 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerDied","Data":"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d"} Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.876529 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47zf9" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.876551 4713 scope.go:117] "RemoveContainer" containerID="0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.876535 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47zf9" event={"ID":"76f1edd1-7b15-4268-a9d2-ab533fdaa9a4","Type":"ContainerDied","Data":"18eb69549a336636c40f4ea5b9d01ed324d83d131b425222c29349936db3e549"} Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.912499 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.918686 4713 scope.go:117] "RemoveContainer" containerID="effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.925992 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-47zf9"] Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.945929 4713 scope.go:117] "RemoveContainer" containerID="acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.998154 4713 scope.go:117] "RemoveContainer" containerID="0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d" Jan 26 16:19:52 crc kubenswrapper[4713]: E0126 16:19:52.998709 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d\": container with ID starting with 0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d not found: ID does not exist" containerID="0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.998744 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d"} err="failed to get container status \"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d\": rpc error: code = NotFound desc = could not find container \"0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d\": container with ID starting with 0e0196ddad344451582e2986b7d68f2de5b17d9ed0579d3f4d71f8d2dfacca2d not found: ID does not exist" Jan 26 16:19:52 crc 
kubenswrapper[4713]: I0126 16:19:52.998770 4713 scope.go:117] "RemoveContainer" containerID="effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58" Jan 26 16:19:52 crc kubenswrapper[4713]: E0126 16:19:52.999017 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58\": container with ID starting with effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58 not found: ID does not exist" containerID="effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.999053 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58"} err="failed to get container status \"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58\": rpc error: code = NotFound desc = could not find container \"effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58\": container with ID starting with effddb31d8d23d7640562b30e73139aaab74691e82cba878eb214d166529fc58 not found: ID does not exist" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.999067 4713 scope.go:117] "RemoveContainer" containerID="acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848" Jan 26 16:19:52 crc kubenswrapper[4713]: E0126 16:19:52.999316 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848\": container with ID starting with acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848 not found: ID does not exist" containerID="acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848" Jan 26 16:19:52 crc kubenswrapper[4713]: I0126 16:19:52.999338 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848"} err="failed to get container status \"acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848\": rpc error: code = NotFound desc = could not find container \"acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848\": container with ID starting with acf17e051c05ed6631719711e19682d83b908096b417ae43be0f96471d584848 not found: ID does not exist" Jan 26 16:19:53 crc kubenswrapper[4713]: I0126 16:19:53.818251 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" path="/var/lib/kubelet/pods/76f1edd1-7b15-4268-a9d2-ab533fdaa9a4/volumes" Jan 26 16:20:03 crc kubenswrapper[4713]: I0126 16:20:03.301853 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:20:03 crc kubenswrapper[4713]: I0126 16:20:03.303597 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:20:33 crc kubenswrapper[4713]: I0126 16:20:33.301467 4713 patch_prober.go:28] interesting 
pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:20:33 crc kubenswrapper[4713]: I0126 16:20:33.302081 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:20:33 crc kubenswrapper[4713]: I0126 16:20:33.302136 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:20:33 crc kubenswrapper[4713]: I0126 16:20:33.303259 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:20:33 crc kubenswrapper[4713]: I0126 16:20:33.303351 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f" gracePeriod=600 Jan 26 16:20:34 crc kubenswrapper[4713]: I0126 16:20:34.314253 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f" exitCode=0 Jan 26 16:20:34 crc kubenswrapper[4713]: I0126 16:20:34.314490 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f"} Jan 26 16:20:34 crc kubenswrapper[4713]: I0126 16:20:34.315006 4713 scope.go:117] "RemoveContainer" containerID="a055f1e5ccfc64c449c1f82ea18590e4749b7525b35ec94dc9d08301e8497761" Jan 26 16:20:35 crc kubenswrapper[4713]: I0126 16:20:35.329611 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b"} Jan 26 16:20:37 crc kubenswrapper[4713]: I0126 16:20:37.352493 4713 generic.go:334] "Generic (PLEG): container finished" podID="9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" containerID="dace8efdae7d13c793d2e101dc41ab6235d21bf8a42ae1b817d2abc903b6ab43" exitCode=0 Jan 26 16:20:37 crc kubenswrapper[4713]: I0126 16:20:37.352737 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" event={"ID":"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7","Type":"ContainerDied","Data":"dace8efdae7d13c793d2e101dc41ab6235d21bf8a42ae1b817d2abc903b6ab43"} Jan 26 16:20:38 crc kubenswrapper[4713]: I0126 16:20:38.903679 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.001732 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.001897 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.001982 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.002074 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.002191 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.002258 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.002325 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2d4j\" (UniqueName: \"kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.003157 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.004792 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0\") pod \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\" (UID: \"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7\") " Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.009013 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.011085 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j" (OuterVolumeSpecName: "kube-api-access-f2d4j") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "kube-api-access-f2d4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.028867 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory" (OuterVolumeSpecName: "inventory") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.037564 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.037900 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.039134 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.041113 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.043918 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.056849 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" (UID: "9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108348 4713 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108402 4713 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108415 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108430 4713 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108441 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108453 4713 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108518 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2d4j\" (UniqueName: \"kubernetes.io/projected/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-kube-api-access-f2d4j\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108530 4713 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.108540 4713 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.377603 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" event={"ID":"9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7","Type":"ContainerDied","Data":"bef3194e153eb51d1133f75dab47722e9d70a9825e88308b1587013142c4e8dc"} Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.377951 4713 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bef3194e153eb51d1133f75dab47722e9d70a9825e88308b1587013142c4e8dc" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.377701 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x44rg" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.500320 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b"] Jan 26 16:20:39 crc kubenswrapper[4713]: E0126 16:20:39.501012 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="extract-content" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501042 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="extract-content" Jan 26 16:20:39 crc kubenswrapper[4713]: E0126 16:20:39.501086 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="extract-utilities" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501098 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="extract-utilities" Jan 26 16:20:39 crc kubenswrapper[4713]: E0126 16:20:39.501126 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="registry-server" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501138 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="registry-server" Jan 26 16:20:39 crc kubenswrapper[4713]: E0126 16:20:39.501170 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501181 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501475 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.501513 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f1edd1-7b15-4268-a9d2-ab533fdaa9a4" containerName="registry-server" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.502693 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.505208 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.505559 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xs5x5" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.506398 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.508217 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.513227 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b"] Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.515460 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.624204 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.624266 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxpp\" (UniqueName: \"kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.624894 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.624940 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.625101 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc 
kubenswrapper[4713]: I0126 16:20:39.625143 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.625268 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.726788 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.726846 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.726909 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.726956 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.726990 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxpp\" (UniqueName: \"kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.727356 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" 
(UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.727429 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.731860 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.732419 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.732754 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.733186 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.733367 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.734771 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.746637 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxpp\" (UniqueName: \"kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:39 crc kubenswrapper[4713]: I0126 16:20:39.832934 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:20:40 crc kubenswrapper[4713]: I0126 16:20:40.463252 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b"] Jan 26 16:20:41 crc kubenswrapper[4713]: I0126 16:20:41.402316 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" event={"ID":"a4c0ccc6-3259-4551-be60-b8b5599884fa","Type":"ContainerStarted","Data":"1766800f1d760b5fb492cf7d595e726fc2cb2b07277872d0280283aee934796a"} Jan 26 16:20:41 crc kubenswrapper[4713]: I0126 16:20:41.403101 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" event={"ID":"a4c0ccc6-3259-4551-be60-b8b5599884fa","Type":"ContainerStarted","Data":"69ba7879c1b6c3ae3903b0ad25157a745b669091228633b13d06e26ab6e2c469"} Jan 26 16:20:41 crc kubenswrapper[4713]: I0126 16:20:41.434789 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" podStartSLOduration=1.788395919 podStartE2EDuration="2.434764449s" podCreationTimestamp="2026-01-26 16:20:39 +0000 UTC" firstStartedPulling="2026-01-26 16:20:40.472484591 +0000 UTC m=+2815.609501866" lastFinishedPulling="2026-01-26 16:20:41.118853151 +0000 UTC m=+2816.255870396" observedRunningTime="2026-01-26 16:20:41.421031276 +0000 UTC m=+2816.558048521" watchObservedRunningTime="2026-01-26 16:20:41.434764449 +0000 UTC m=+2816.571781684" Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.818462 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.827620 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.873322 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.902959 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.903097 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmjx\" (UniqueName: \"kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:49 crc kubenswrapper[4713]: I0126 16:20:49.903205 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.005812 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.006247 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmjx\" (UniqueName: \"kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.006313 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.006458 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.006737 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.028287 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tjmjx\" (UniqueName: \"kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx\") pod \"redhat-marketplace-vjthw\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.144053 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:20:50 crc kubenswrapper[4713]: W0126 16:20:50.723646 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88882565_b750_41f0_9928_03017fde5613.slice/crio-c0c9e09d1469230ff042b5ae5e77ef5efdab45ec5f7b740e2588298c49a535fb WatchSource:0}: Error finding container c0c9e09d1469230ff042b5ae5e77ef5efdab45ec5f7b740e2588298c49a535fb: Status 404 returned error can't find the container with id c0c9e09d1469230ff042b5ae5e77ef5efdab45ec5f7b740e2588298c49a535fb Jan 26 16:20:50 crc kubenswrapper[4713]: I0126 16:20:50.723931 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:20:51 crc kubenswrapper[4713]: I0126 16:20:51.493142 4713 generic.go:334] "Generic (PLEG): container finished" podID="88882565-b750-41f0-9928-03017fde5613" containerID="0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7" exitCode=0 Jan 26 16:20:51 crc kubenswrapper[4713]: I0126 16:20:51.494272 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerDied","Data":"0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7"} Jan 26 16:20:51 crc kubenswrapper[4713]: I0126 16:20:51.494431 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerStarted","Data":"c0c9e09d1469230ff042b5ae5e77ef5efdab45ec5f7b740e2588298c49a535fb"} Jan 26 16:20:53 crc kubenswrapper[4713]: I0126 16:20:53.514926 4713 generic.go:334] "Generic (PLEG): container finished" podID="88882565-b750-41f0-9928-03017fde5613" containerID="7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc" exitCode=0 Jan 26 16:20:53 crc kubenswrapper[4713]: I0126 16:20:53.515021 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerDied","Data":"7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc"} Jan 26 16:20:54 crc kubenswrapper[4713]: I0126 16:20:54.528954 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerStarted","Data":"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9"} Jan 26 16:20:54 crc kubenswrapper[4713]: I0126 16:20:54.552594 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjthw" podStartSLOduration=2.835509178 podStartE2EDuration="5.552572606s" podCreationTimestamp="2026-01-26 16:20:49 +0000 UTC" firstStartedPulling="2026-01-26 16:20:51.49662403 +0000 UTC m=+2826.633641265" lastFinishedPulling="2026-01-26 16:20:54.213687458 +0000 UTC m=+2829.350704693" observedRunningTime="2026-01-26 16:20:54.546401544 +0000 UTC m=+2829.683418779" 
watchObservedRunningTime="2026-01-26 16:20:54.552572606 +0000 UTC m=+2829.689589841" Jan 26 16:21:00 crc kubenswrapper[4713]: I0126 16:21:00.144220 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:00 crc kubenswrapper[4713]: I0126 16:21:00.144780 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:00 crc kubenswrapper[4713]: I0126 16:21:00.210167 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:00 crc kubenswrapper[4713]: I0126 16:21:00.674286 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:00 crc kubenswrapper[4713]: I0126 16:21:00.793268 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:21:02 crc kubenswrapper[4713]: I0126 16:21:02.606653 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjthw" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="registry-server" containerID="cri-o://f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9" gracePeriod=2 Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.172111 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.303490 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjmjx\" (UniqueName: \"kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx\") pod \"88882565-b750-41f0-9928-03017fde5613\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.303634 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities\") pod \"88882565-b750-41f0-9928-03017fde5613\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.303804 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content\") pod \"88882565-b750-41f0-9928-03017fde5613\" (UID: \"88882565-b750-41f0-9928-03017fde5613\") " Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.305011 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities" (OuterVolumeSpecName: "utilities") pod "88882565-b750-41f0-9928-03017fde5613" (UID: "88882565-b750-41f0-9928-03017fde5613"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.310129 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx" (OuterVolumeSpecName: "kube-api-access-tjmjx") pod "88882565-b750-41f0-9928-03017fde5613" (UID: "88882565-b750-41f0-9928-03017fde5613"). InnerVolumeSpecName "kube-api-access-tjmjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.333012 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88882565-b750-41f0-9928-03017fde5613" (UID: "88882565-b750-41f0-9928-03017fde5613"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.406718 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjmjx\" (UniqueName: \"kubernetes.io/projected/88882565-b750-41f0-9928-03017fde5613-kube-api-access-tjmjx\") on node \"crc\" DevicePath \"\"" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.406779 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.406794 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88882565-b750-41f0-9928-03017fde5613-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.643240 4713 generic.go:334] "Generic (PLEG): container finished" podID="88882565-b750-41f0-9928-03017fde5613" containerID="f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9" exitCode=0 Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.643301 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerDied","Data":"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9"} Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.643333 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjthw" event={"ID":"88882565-b750-41f0-9928-03017fde5613","Type":"ContainerDied","Data":"c0c9e09d1469230ff042b5ae5e77ef5efdab45ec5f7b740e2588298c49a535fb"} Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.643418 4713 scope.go:117] "RemoveContainer" containerID="f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.647981 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjthw" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.671649 4713 scope.go:117] "RemoveContainer" containerID="7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.688041 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.710865 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjthw"] Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.714166 4713 scope.go:117] "RemoveContainer" containerID="0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.749552 4713 scope.go:117] "RemoveContainer" containerID="f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9" Jan 26 16:21:03 crc kubenswrapper[4713]: E0126 16:21:03.750030 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9\": container with ID starting with f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9 not found: ID does not exist" containerID="f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.750075 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9"} err="failed to get container status \"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9\": rpc error: code = NotFound desc = could not find container \"f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9\": container with ID starting with f438e6f5ae852fd02be615fd635b8b3691b045263b91c4f4845969e47bfac0b9 not found: ID does not exist" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.750105 4713 scope.go:117] "RemoveContainer" containerID="7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc" Jan 26 16:21:03 crc kubenswrapper[4713]: E0126 16:21:03.750725 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc\": container with ID starting with 7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc not found: ID does not exist" containerID="7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.750745 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc"} err="failed to get container status \"7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc\": rpc error: code = NotFound desc = could not find container \"7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc\": container with ID starting with 7c794cfc0d1988105aef2f8d61691fe228a2c3ff63ee0369437f658971aaa0bc not found: ID does not exist" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.750757 4713 scope.go:117] "RemoveContainer" containerID="0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7" Jan 26 16:21:03 crc kubenswrapper[4713]: E0126 16:21:03.750953 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7\": container with ID starting with 0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7 not found: ID does not exist" containerID="0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.750971 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7"} err="failed to get container status \"0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7\": rpc error: code = NotFound desc = could not find container \"0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7\": container with ID starting with 0c8807ca45dd26ec720368e877c7e7712fc12224834704b32704e75f4cc1b0b7 not found: ID does not exist" Jan 26 16:21:03 crc kubenswrapper[4713]: I0126 16:21:03.814632 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88882565-b750-41f0-9928-03017fde5613" path="/var/lib/kubelet/pods/88882565-b750-41f0-9928-03017fde5613/volumes" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.855207 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:21:59 crc kubenswrapper[4713]: E0126 16:21:59.861765 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="registry-server" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.861806 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="registry-server" Jan 26 16:21:59 crc kubenswrapper[4713]: E0126 16:21:59.861830 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="extract-content" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.861839 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="extract-content" Jan 26 16:21:59 crc kubenswrapper[4713]: E0126 16:21:59.861867 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="extract-utilities" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.861876 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="extract-utilities" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.862114 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="88882565-b750-41f0-9928-03017fde5613" containerName="registry-server" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.864128 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.875215 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.907494 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.907543 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:21:59 crc kubenswrapper[4713]: I0126 16:21:59.907571 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8dd\" (UniqueName: \"kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.009119 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.009158 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.009181 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm8dd\" (UniqueName: \"kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.009805 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.009869 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.028999 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dm8dd\" (UniqueName: \"kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd\") pod \"community-operators-c4rrn\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.188544 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:00 crc kubenswrapper[4713]: I0126 16:22:00.728926 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:22:00 crc kubenswrapper[4713]: W0126 16:22:00.733893 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85b0c991_af56_4d9e_8052_bf4d5fdc4669.slice/crio-b62bc7b74994a67ac35fe7a47be5a198a07be4123ad907cde80aa1df4f046aa4 WatchSource:0}: Error finding container b62bc7b74994a67ac35fe7a47be5a198a07be4123ad907cde80aa1df4f046aa4: Status 404 returned error can't find the container with id b62bc7b74994a67ac35fe7a47be5a198a07be4123ad907cde80aa1df4f046aa4 Jan 26 16:22:01 crc kubenswrapper[4713]: I0126 16:22:01.227218 4713 generic.go:334] "Generic (PLEG): container finished" podID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerID="d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5" exitCode=0 Jan 26 16:22:01 crc kubenswrapper[4713]: I0126 16:22:01.227508 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerDied","Data":"d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5"} Jan 26 16:22:01 crc kubenswrapper[4713]: I0126 16:22:01.227540 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerStarted","Data":"b62bc7b74994a67ac35fe7a47be5a198a07be4123ad907cde80aa1df4f046aa4"} Jan 26 16:22:03 crc kubenswrapper[4713]: I0126 16:22:03.248635 4713 generic.go:334] "Generic (PLEG): container finished" podID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerID="c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96" exitCode=0 Jan 26 16:22:03 crc kubenswrapper[4713]: I0126 16:22:03.249031 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerDied","Data":"c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96"} Jan 26 16:22:04 crc kubenswrapper[4713]: I0126 16:22:04.264006 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerStarted","Data":"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd"} Jan 26 16:22:04 crc kubenswrapper[4713]: I0126 16:22:04.294730 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4rrn" podStartSLOduration=2.782024202 podStartE2EDuration="5.294704593s" podCreationTimestamp="2026-01-26 16:21:59 +0000 UTC" firstStartedPulling="2026-01-26 16:22:01.229170719 +0000 UTC m=+2896.366187964" lastFinishedPulling="2026-01-26 16:22:03.74185111 +0000 UTC m=+2898.878868355" observedRunningTime="2026-01-26 16:22:04.284270242 +0000 UTC 
m=+2899.421287517" watchObservedRunningTime="2026-01-26 16:22:04.294704593 +0000 UTC m=+2899.431721858" Jan 26 16:22:10 crc kubenswrapper[4713]: I0126 16:22:10.189083 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:10 crc kubenswrapper[4713]: I0126 16:22:10.189714 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:10 crc kubenswrapper[4713]: I0126 16:22:10.263903 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:10 crc kubenswrapper[4713]: I0126 16:22:10.391779 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:10 crc kubenswrapper[4713]: I0126 16:22:10.510861 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:22:12 crc kubenswrapper[4713]: I0126 16:22:12.356848 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4rrn" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="registry-server" containerID="cri-o://9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd" gracePeriod=2 Jan 26 16:22:12 crc kubenswrapper[4713]: I0126 16:22:12.936708 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.069535 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities\") pod \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.069617 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm8dd\" (UniqueName: \"kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd\") pod \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.069658 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content\") pod \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\" (UID: \"85b0c991-af56-4d9e-8052-bf4d5fdc4669\") " Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.070961 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities" (OuterVolumeSpecName: "utilities") pod "85b0c991-af56-4d9e-8052-bf4d5fdc4669" (UID: "85b0c991-af56-4d9e-8052-bf4d5fdc4669"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.076008 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd" (OuterVolumeSpecName: "kube-api-access-dm8dd") pod "85b0c991-af56-4d9e-8052-bf4d5fdc4669" (UID: "85b0c991-af56-4d9e-8052-bf4d5fdc4669"). InnerVolumeSpecName "kube-api-access-dm8dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.132825 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85b0c991-af56-4d9e-8052-bf4d5fdc4669" (UID: "85b0c991-af56-4d9e-8052-bf4d5fdc4669"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.173082 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.173118 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm8dd\" (UniqueName: \"kubernetes.io/projected/85b0c991-af56-4d9e-8052-bf4d5fdc4669-kube-api-access-dm8dd\") on node \"crc\" DevicePath \"\"" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.173129 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b0c991-af56-4d9e-8052-bf4d5fdc4669-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.368135 4713 generic.go:334] "Generic (PLEG): container finished" podID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerID="9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd" exitCode=0 Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.368195 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerDied","Data":"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd"} Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.369290 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4rrn" event={"ID":"85b0c991-af56-4d9e-8052-bf4d5fdc4669","Type":"ContainerDied","Data":"b62bc7b74994a67ac35fe7a47be5a198a07be4123ad907cde80aa1df4f046aa4"} Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.369396 4713 scope.go:117] "RemoveContainer" containerID="9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.368229 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4rrn" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.402954 4713 scope.go:117] "RemoveContainer" containerID="c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.409895 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.422444 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4rrn"] Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.433875 4713 scope.go:117] "RemoveContainer" containerID="d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.492391 4713 scope.go:117] "RemoveContainer" containerID="9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd" Jan 26 16:22:13 crc kubenswrapper[4713]: E0126 16:22:13.492903 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd\": container with ID starting with 9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd not found: ID does not exist" containerID="9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.492958 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd"} err="failed to get container status \"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd\": rpc error: code = NotFound desc = could not find container \"9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd\": container with ID starting with 9a53a06cae2ead058ae5dc08352ead5c0ad50c1ae70b44ee7d2cf21cb870e7bd not found: ID does not exist" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.492991 4713 scope.go:117] "RemoveContainer" containerID="c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96" Jan 26 16:22:13 crc kubenswrapper[4713]: E0126 16:22:13.493335 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96\": container with ID starting with c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96 not found: ID does not exist" containerID="c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.493398 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96"} err="failed to get container status \"c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96\": rpc error: code = NotFound desc = could not find container \"c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96\": container with ID starting with c499a90f2639f648d1b5dbe767fdfa7e5905975625c26d0dc575e6ea4044bf96 not found: ID does not exist" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.493425 4713 scope.go:117] "RemoveContainer" containerID="d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5" Jan 26 16:22:13 crc kubenswrapper[4713]: E0126 16:22:13.493664 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5\": container with ID starting with d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5 not found: ID does not exist" containerID="d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.493692 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5"} err="failed to get container status \"d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5\": rpc error: code = NotFound desc = could not find container \"d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5\": container with ID starting with d7acf6c2054398749f38618218fadc286c943e8532a0a783eb76bc8f469c6ea5 not found: ID does not exist" Jan 26 16:22:13 crc kubenswrapper[4713]: I0126 16:22:13.818518 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" path="/var/lib/kubelet/pods/85b0c991-af56-4d9e-8052-bf4d5fdc4669/volumes" Jan 26 16:23:03 crc kubenswrapper[4713]: I0126 16:23:03.301650 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:23:03 crc kubenswrapper[4713]: I0126 16:23:03.302254 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:23:16 crc kubenswrapper[4713]: I0126 16:23:16.095179 4713 generic.go:334] "Generic (PLEG): container finished" podID="a4c0ccc6-3259-4551-be60-b8b5599884fa" containerID="1766800f1d760b5fb492cf7d595e726fc2cb2b07277872d0280283aee934796a" exitCode=0 Jan 26 16:23:16 crc kubenswrapper[4713]: I0126 16:23:16.095324 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" event={"ID":"a4c0ccc6-3259-4551-be60-b8b5599884fa","Type":"ContainerDied","Data":"1766800f1d760b5fb492cf7d595e726fc2cb2b07277872d0280283aee934796a"} Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.642218 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782094 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782159 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782259 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782491 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782603 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782748 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnxpp\" (UniqueName: \"kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.782824 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2\") pod \"a4c0ccc6-3259-4551-be60-b8b5599884fa\" (UID: \"a4c0ccc6-3259-4551-be60-b8b5599884fa\") " Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.796747 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp" (OuterVolumeSpecName: "kube-api-access-gnxpp") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "kube-api-access-gnxpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.796858 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.812572 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.821058 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.821786 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.845487 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory" (OuterVolumeSpecName: "inventory") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.860154 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4c0ccc6-3259-4551-be60-b8b5599884fa" (UID: "a4c0ccc6-3259-4551-be60-b8b5599884fa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886395 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnxpp\" (UniqueName: \"kubernetes.io/projected/a4c0ccc6-3259-4551-be60-b8b5599884fa-kube-api-access-gnxpp\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886448 4713 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886461 4713 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886474 4713 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886490 4713 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886501 4713 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:17 crc kubenswrapper[4713]: I0126 16:23:17.886510 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4c0ccc6-3259-4551-be60-b8b5599884fa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:18 crc kubenswrapper[4713]: I0126 16:23:18.124832 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" event={"ID":"a4c0ccc6-3259-4551-be60-b8b5599884fa","Type":"ContainerDied","Data":"69ba7879c1b6c3ae3903b0ad25157a745b669091228633b13d06e26ab6e2c469"} Jan 26 16:23:18 crc kubenswrapper[4713]: I0126 16:23:18.125104 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69ba7879c1b6c3ae3903b0ad25157a745b669091228633b13d06e26ab6e2c469" Jan 26 16:23:18 crc kubenswrapper[4713]: I0126 16:23:18.124881 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b" Jan 26 16:23:33 crc kubenswrapper[4713]: I0126 16:23:33.301317 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:23:33 crc kubenswrapper[4713]: I0126 16:23:33.302032 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.301218 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.301874 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.301932 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.302868 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.302943 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" gracePeriod=600 Jan 26 16:24:03 crc kubenswrapper[4713]: E0126 16:24:03.426773 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.619849 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" exitCode=0 Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.619918 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b"} Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.619964 4713 scope.go:117] "RemoveContainer" containerID="4d03474c3ce9ee80cf039013a706e0db548f7a66785997b0ed513ed768260d0f" Jan 26 16:24:03 crc kubenswrapper[4713]: I0126 16:24:03.620852 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:24:03 crc kubenswrapper[4713]: E0126 16:24:03.621272 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:24:16 crc kubenswrapper[4713]: I0126 16:24:16.803996 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:24:16 crc kubenswrapper[4713]: E0126 16:24:16.805198 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:24:29 crc kubenswrapper[4713]: I0126 16:24:29.804552 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:24:29 crc kubenswrapper[4713]: E0126 16:24:29.805409 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:24:42 crc kubenswrapper[4713]: I0126 16:24:42.805578 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:24:42 crc kubenswrapper[4713]: E0126 16:24:42.807518 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:24:48 crc kubenswrapper[4713]: E0126 16:24:48.895165 4713 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.194:49972->38.102.83.194:39293: write tcp 38.102.83.194:49972->38.102.83.194:39293: write: broken pipe Jan 26 16:24:54 crc kubenswrapper[4713]: E0126 16:24:54.931519 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:24:57 crc kubenswrapper[4713]: I0126 16:24:57.804077 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:24:57 crc kubenswrapper[4713]: E0126 16:24:57.805006 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.066476 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 16:25:00 crc kubenswrapper[4713]: E0126 16:25:00.067155 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c0ccc6-3259-4551-be60-b8b5599884fa" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067169 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c0ccc6-3259-4551-be60-b8b5599884fa" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:25:00 crc kubenswrapper[4713]: E0126 16:25:00.067203 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="extract-content" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067209 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="extract-content" Jan 26 16:25:00 crc kubenswrapper[4713]: E0126 16:25:00.067223 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="extract-utilities" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067232 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="extract-utilities" Jan 26 16:25:00 crc kubenswrapper[4713]: E0126 16:25:00.067247 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="registry-server" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067254 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="registry-server" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067735 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b0c991-af56-4d9e-8052-bf4d5fdc4669" containerName="registry-server" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.067774 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c0ccc6-3259-4551-be60-b8b5599884fa" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.068715 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.072092 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.072386 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.072565 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ld8dg" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.072618 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.085610 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.164732 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jkl5\" (UniqueName: \"kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.164796 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.164877 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.164974 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.165107 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.165185 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.165348 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.165394 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.165415 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.266898 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.266971 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267090 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267118 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267146 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267205 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jkl5\" (UniqueName: \"kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267243 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " 
pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267276 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.267311 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.270599 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.271897 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.272659 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.272989 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.274111 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.276286 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.277874 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.287104 
4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.303476 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jkl5\" (UniqueName: \"kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.320596 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.392568 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.876078 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 16:25:00 crc kubenswrapper[4713]: I0126 16:25:00.883987 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:25:01 crc kubenswrapper[4713]: I0126 16:25:01.244264 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b9ed8b20-616a-49b3-b0bb-ad86c228de84","Type":"ContainerStarted","Data":"a32bb26731e1c6ac71792b3d08ef0e4129bedd2fdd8e440b8c37be1d0098ca1f"} Jan 26 16:25:05 crc kubenswrapper[4713]: E0126 16:25:05.295386 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:25:11 crc kubenswrapper[4713]: I0126 16:25:11.805676 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:25:11 crc kubenswrapper[4713]: E0126 16:25:11.806337 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:25:15 crc kubenswrapper[4713]: E0126 16:25:15.630041 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:25:24 crc kubenswrapper[4713]: I0126 16:25:24.804817 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:25:24 crc kubenswrapper[4713]: E0126 16:25:24.807545 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:25:25 crc kubenswrapper[4713]: E0126 16:25:25.910346 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:25:36 crc kubenswrapper[4713]: E0126 16:25:36.240926 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:25:36 crc kubenswrapper[4713]: I0126 16:25:36.804333 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:25:36 crc kubenswrapper[4713]: E0126 16:25:36.805023 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:25:39 crc kubenswrapper[4713]: E0126 16:25:39.210979 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 26 16:25:39 crc kubenswrapper[4713]: E0126 16:25:39.211509 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jkl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b9ed8b20-616a-49b3-b0bb-ad86c228de84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:25:39 crc kubenswrapper[4713]: E0126 16:25:39.212735 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" Jan 26 16:25:39 crc kubenswrapper[4713]: E0126 16:25:39.662176 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.174829 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.177964 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.195072 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.335049 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.335635 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684p5\" (UniqueName: \"kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.335707 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.437250 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.437304 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684p5\" (UniqueName: \"kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.437462 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.437872 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.437943 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.462072 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684p5\" (UniqueName: \"kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5\") pod \"certified-operators-m6xcb\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:46 crc kubenswrapper[4713]: I0126 16:25:46.507947 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:47 crc kubenswrapper[4713]: I0126 16:25:47.122960 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:47 crc kubenswrapper[4713]: I0126 16:25:47.740504 4713 generic.go:334] "Generic (PLEG): container finished" podID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerID="a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691" exitCode=0 Jan 26 16:25:47 crc kubenswrapper[4713]: I0126 16:25:47.740613 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerDied","Data":"a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691"} Jan 26 16:25:47 crc kubenswrapper[4713]: I0126 16:25:47.740812 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerStarted","Data":"c46b076cd75e9a3e914f35b37a65708304978c217f1e3ccbc4edac90f05e9c5b"} Jan 26 16:25:48 crc kubenswrapper[4713]: I0126 16:25:48.804345 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:25:48 crc kubenswrapper[4713]: E0126 16:25:48.805032 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:25:49 crc kubenswrapper[4713]: I0126 16:25:49.764132 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerStarted","Data":"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a"} Jan 26 16:25:50 crc kubenswrapper[4713]: I0126 16:25:50.793428 4713 generic.go:334] "Generic (PLEG): container finished" podID="77785795-b5c3-46e6-8db1-ea24985b68ea" 
containerID="41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a" exitCode=0 Jan 26 16:25:50 crc kubenswrapper[4713]: I0126 16:25:50.793884 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerDied","Data":"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a"} Jan 26 16:25:51 crc kubenswrapper[4713]: I0126 16:25:51.588785 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 16:25:52 crc kubenswrapper[4713]: I0126 16:25:52.813918 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerStarted","Data":"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82"} Jan 26 16:25:52 crc kubenswrapper[4713]: I0126 16:25:52.839772 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6xcb" podStartSLOduration=2.999600629 podStartE2EDuration="6.839751087s" podCreationTimestamp="2026-01-26 16:25:46 +0000 UTC" firstStartedPulling="2026-01-26 16:25:47.743169151 +0000 UTC m=+3122.880186386" lastFinishedPulling="2026-01-26 16:25:51.583319599 +0000 UTC m=+3126.720336844" observedRunningTime="2026-01-26 16:25:52.833240146 +0000 UTC m=+3127.970257381" watchObservedRunningTime="2026-01-26 16:25:52.839751087 +0000 UTC m=+3127.976768322" Jan 26 16:25:53 crc kubenswrapper[4713]: I0126 16:25:53.826623 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b9ed8b20-616a-49b3-b0bb-ad86c228de84","Type":"ContainerStarted","Data":"3aa5efeca6cbf79cfe683bea216dee1794f29947a762975f9fc46d223099a198"} Jan 26 16:25:53 crc kubenswrapper[4713]: I0126 16:25:53.846786 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.144198465 podStartE2EDuration="54.846765681s" podCreationTimestamp="2026-01-26 16:24:59 +0000 UTC" firstStartedPulling="2026-01-26 16:25:00.883555041 +0000 UTC m=+3076.020572316" lastFinishedPulling="2026-01-26 16:25:51.586122287 +0000 UTC m=+3126.723139532" observedRunningTime="2026-01-26 16:25:53.844329523 +0000 UTC m=+3128.981346768" watchObservedRunningTime="2026-01-26 16:25:53.846765681 +0000 UTC m=+3128.983782926" Jan 26 16:25:56 crc kubenswrapper[4713]: I0126 16:25:56.508990 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:56 crc kubenswrapper[4713]: I0126 16:25:56.510326 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:56 crc kubenswrapper[4713]: I0126 16:25:56.559066 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:56 crc kubenswrapper[4713]: I0126 16:25:56.912198 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:56 crc kubenswrapper[4713]: I0126 16:25:56.957494 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:58 crc kubenswrapper[4713]: I0126 16:25:58.877674 4713 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-m6xcb" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="registry-server" containerID="cri-o://4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82" gracePeriod=2 Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.400453 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.446874 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities\") pod \"77785795-b5c3-46e6-8db1-ea24985b68ea\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.447100 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content\") pod \"77785795-b5c3-46e6-8db1-ea24985b68ea\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.447171 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-684p5\" (UniqueName: \"kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5\") pod \"77785795-b5c3-46e6-8db1-ea24985b68ea\" (UID: \"77785795-b5c3-46e6-8db1-ea24985b68ea\") " Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.448275 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities" (OuterVolumeSpecName: "utilities") pod "77785795-b5c3-46e6-8db1-ea24985b68ea" (UID: "77785795-b5c3-46e6-8db1-ea24985b68ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.454881 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5" (OuterVolumeSpecName: "kube-api-access-684p5") pod "77785795-b5c3-46e6-8db1-ea24985b68ea" (UID: "77785795-b5c3-46e6-8db1-ea24985b68ea"). InnerVolumeSpecName "kube-api-access-684p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.492179 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77785795-b5c3-46e6-8db1-ea24985b68ea" (UID: "77785795-b5c3-46e6-8db1-ea24985b68ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.549468 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.549501 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77785795-b5c3-46e6-8db1-ea24985b68ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.549512 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-684p5\" (UniqueName: \"kubernetes.io/projected/77785795-b5c3-46e6-8db1-ea24985b68ea-kube-api-access-684p5\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.889249 4713 generic.go:334] "Generic (PLEG): container finished" podID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerID="4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82" exitCode=0 Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.889300 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerDied","Data":"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82"} Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.889331 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6xcb" event={"ID":"77785795-b5c3-46e6-8db1-ea24985b68ea","Type":"ContainerDied","Data":"c46b076cd75e9a3e914f35b37a65708304978c217f1e3ccbc4edac90f05e9c5b"} Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.889350 4713 scope.go:117] "RemoveContainer" containerID="4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.889600 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6xcb" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.916981 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.926191 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6xcb"] Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.927875 4713 scope.go:117] "RemoveContainer" containerID="41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a" Jan 26 16:25:59 crc kubenswrapper[4713]: I0126 16:25:59.951160 4713 scope.go:117] "RemoveContainer" containerID="a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.003378 4713 scope.go:117] "RemoveContainer" containerID="4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82" Jan 26 16:26:00 crc kubenswrapper[4713]: E0126 16:26:00.004216 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82\": container with ID starting with 4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82 not found: ID does not exist" containerID="4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.004244 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82"} err="failed to get container status \"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82\": rpc error: code = NotFound desc = could not find container \"4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82\": container with ID starting with 4e2128c5c9440462c39b50fa3e7fd03ad45ad7c767646aaa92459d615fc03c82 not found: ID does not exist" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.004266 4713 scope.go:117] "RemoveContainer" containerID="41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a" Jan 26 16:26:00 crc kubenswrapper[4713]: E0126 16:26:00.004714 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a\": container with ID starting with 41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a not found: ID does not exist" containerID="41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.004759 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a"} err="failed to get container status \"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a\": rpc error: code = NotFound desc = could not find container \"41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a\": container with ID starting with 41fd669ec75bd4c6bd4b77034fe694b5cb516a84868fc5838ce2226ab302839a not found: ID does not exist" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.004805 4713 scope.go:117] "RemoveContainer" containerID="a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691" Jan 26 16:26:00 crc kubenswrapper[4713]: E0126 16:26:00.005203 4713 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691\": container with ID starting with a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691 not found: ID does not exist" containerID="a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691" Jan 26 16:26:00 crc kubenswrapper[4713]: I0126 16:26:00.005242 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691"} err="failed to get container status \"a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691\": rpc error: code = NotFound desc = could not find container \"a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691\": container with ID starting with a750d76c2db58746e6789446824bcc146a824999646a1b1ffcc49037c468d691 not found: ID does not exist" Jan 26 16:26:01 crc kubenswrapper[4713]: I0126 16:26:01.822524 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" path="/var/lib/kubelet/pods/77785795-b5c3-46e6-8db1-ea24985b68ea/volumes" Jan 26 16:26:03 crc kubenswrapper[4713]: I0126 16:26:03.803678 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:26:03 crc kubenswrapper[4713]: E0126 16:26:03.804326 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:26:14 crc kubenswrapper[4713]: I0126 16:26:14.803402 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:26:14 crc kubenswrapper[4713]: E0126 16:26:14.804184 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:26:25 crc kubenswrapper[4713]: I0126 16:26:25.811042 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:26:25 crc kubenswrapper[4713]: E0126 16:26:25.812919 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:26:39 crc kubenswrapper[4713]: I0126 16:26:39.804420 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:26:39 crc kubenswrapper[4713]: E0126 16:26:39.805177 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:26:50 crc kubenswrapper[4713]: I0126 16:26:50.803731 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:26:50 crc kubenswrapper[4713]: E0126 16:26:50.805474 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:27:04 crc kubenswrapper[4713]: I0126 16:27:04.803495 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:27:04 crc kubenswrapper[4713]: E0126 16:27:04.804296 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:27:15 crc kubenswrapper[4713]: I0126 16:27:15.814699 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:27:15 crc kubenswrapper[4713]: E0126 16:27:15.815316 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:27:29 crc kubenswrapper[4713]: I0126 16:27:29.804017 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:27:29 crc kubenswrapper[4713]: E0126 16:27:29.804795 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:27:43 crc kubenswrapper[4713]: I0126 16:27:43.803835 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:27:43 crc kubenswrapper[4713]: E0126 16:27:43.804588 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:27:55 crc kubenswrapper[4713]: I0126 16:27:55.814591 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:27:55 crc kubenswrapper[4713]: E0126 16:27:55.815881 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:28:08 crc kubenswrapper[4713]: I0126 16:28:08.804010 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:28:08 crc kubenswrapper[4713]: E0126 16:28:08.804744 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:28:19 crc kubenswrapper[4713]: I0126 16:28:19.803693 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:28:19 crc kubenswrapper[4713]: E0126 16:28:19.804720 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:28:30 crc kubenswrapper[4713]: I0126 16:28:30.803312 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:28:30 crc kubenswrapper[4713]: E0126 16:28:30.804124 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:28:45 crc kubenswrapper[4713]: I0126 16:28:45.809599 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:28:45 crc kubenswrapper[4713]: E0126 16:28:45.810558 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" 
podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:28:56 crc kubenswrapper[4713]: I0126 16:28:56.803049 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:28:56 crc kubenswrapper[4713]: E0126 16:28:56.803728 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:29:07 crc kubenswrapper[4713]: I0126 16:29:07.803818 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:29:08 crc kubenswrapper[4713]: I0126 16:29:08.834594 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb"} Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.461121 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:29:43 crc kubenswrapper[4713]: E0126 16:29:43.462246 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="extract-utilities" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.462263 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="extract-utilities" Jan 26 16:29:43 crc kubenswrapper[4713]: E0126 16:29:43.462279 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="extract-content" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.462286 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="extract-content" Jan 26 16:29:43 crc kubenswrapper[4713]: E0126 16:29:43.462316 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="registry-server" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.462324 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="registry-server" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.462617 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="77785795-b5c3-46e6-8db1-ea24985b68ea" containerName="registry-server" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.464488 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.469930 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.539670 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.539864 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.539921 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crw5m\" (UniqueName: \"kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.641532 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.641602 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crw5m\" (UniqueName: \"kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.641684 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.642518 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.642633 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.668480 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-crw5m\" (UniqueName: \"kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m\") pod \"redhat-operators-rtfxn\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:43 crc kubenswrapper[4713]: I0126 16:29:43.790615 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:29:44 crc kubenswrapper[4713]: W0126 16:29:44.370054 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7750107e_4fca_41f0_ae40_e61d36d688a7.slice/crio-7fd6cb1b740dad3c5e1b5b4002155c8309d959292887a8621f1f531b9b703c9b WatchSource:0}: Error finding container 7fd6cb1b740dad3c5e1b5b4002155c8309d959292887a8621f1f531b9b703c9b: Status 404 returned error can't find the container with id 7fd6cb1b740dad3c5e1b5b4002155c8309d959292887a8621f1f531b9b703c9b Jan 26 16:29:44 crc kubenswrapper[4713]: I0126 16:29:44.386944 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:29:45 crc kubenswrapper[4713]: I0126 16:29:45.209727 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerStarted","Data":"7c561138bfdf226c40b5598a3dab40019d526fd66cd194feedeacc1fa9871731"} Jan 26 16:29:45 crc kubenswrapper[4713]: I0126 16:29:45.210336 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerStarted","Data":"7fd6cb1b740dad3c5e1b5b4002155c8309d959292887a8621f1f531b9b703c9b"} Jan 26 16:29:46 crc kubenswrapper[4713]: I0126 16:29:46.222203 4713 generic.go:334] "Generic (PLEG): container finished" podID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerID="7c561138bfdf226c40b5598a3dab40019d526fd66cd194feedeacc1fa9871731" exitCode=0 Jan 26 16:29:46 crc kubenswrapper[4713]: I0126 16:29:46.222386 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerDied","Data":"7c561138bfdf226c40b5598a3dab40019d526fd66cd194feedeacc1fa9871731"} Jan 26 16:29:47 crc kubenswrapper[4713]: I0126 16:29:47.234938 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerStarted","Data":"41b5bc383f08d2527dc5ab449cbffc97b0a536b69797ae89d756570792481ef9"} Jan 26 16:29:51 crc kubenswrapper[4713]: I0126 16:29:51.281688 4713 generic.go:334] "Generic (PLEG): container finished" podID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerID="41b5bc383f08d2527dc5ab449cbffc97b0a536b69797ae89d756570792481ef9" exitCode=0 Jan 26 16:29:51 crc kubenswrapper[4713]: I0126 16:29:51.281850 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerDied","Data":"41b5bc383f08d2527dc5ab449cbffc97b0a536b69797ae89d756570792481ef9"} Jan 26 16:29:56 crc kubenswrapper[4713]: I0126 16:29:56.331490 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" 
event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerStarted","Data":"421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6"} Jan 26 16:29:56 crc kubenswrapper[4713]: I0126 16:29:56.360465 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtfxn" podStartSLOduration=4.08178685 podStartE2EDuration="13.36044472s" podCreationTimestamp="2026-01-26 16:29:43 +0000 UTC" firstStartedPulling="2026-01-26 16:29:46.226573808 +0000 UTC m=+3361.363591043" lastFinishedPulling="2026-01-26 16:29:55.505231678 +0000 UTC m=+3370.642248913" observedRunningTime="2026-01-26 16:29:56.355856419 +0000 UTC m=+3371.492873654" watchObservedRunningTime="2026-01-26 16:29:56.36044472 +0000 UTC m=+3371.497461955" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.155961 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6"] Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.158228 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.161831 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.162179 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.168621 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6"] Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.287424 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.289242 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.289314 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p2rs\" (UniqueName: \"kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.391031 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc 
kubenswrapper[4713]: I0126 16:30:00.391098 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2rs\" (UniqueName: \"kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.391220 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.392487 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.396984 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.414654 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2rs\" (UniqueName: \"kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs\") pod \"collect-profiles-29490750-5hrd6\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:00 crc kubenswrapper[4713]: I0126 16:30:00.493332 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:01 crc kubenswrapper[4713]: I0126 16:30:01.563134 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6"] Jan 26 16:30:02 crc kubenswrapper[4713]: I0126 16:30:02.399470 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" event={"ID":"246b0843-8505-46ec-9019-8b40ef113608","Type":"ContainerStarted","Data":"181a835f9cc4669cca17166e0311c950b2a7e46cc6f0154b2fda7618e8121bae"} Jan 26 16:30:03 crc kubenswrapper[4713]: I0126 16:30:03.410170 4713 generic.go:334] "Generic (PLEG): container finished" podID="246b0843-8505-46ec-9019-8b40ef113608" containerID="1263167a09cc325c7461e018d03e06873a1820e727d89f1ce0b8a8c47321c75a" exitCode=0 Jan 26 16:30:03 crc kubenswrapper[4713]: I0126 16:30:03.410483 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" event={"ID":"246b0843-8505-46ec-9019-8b40ef113608","Type":"ContainerDied","Data":"1263167a09cc325c7461e018d03e06873a1820e727d89f1ce0b8a8c47321c75a"} Jan 26 16:30:03 crc kubenswrapper[4713]: I0126 16:30:03.790785 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:03 crc kubenswrapper[4713]: I0126 16:30:03.791059 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:03 crc kubenswrapper[4713]: I0126 16:30:03.861172 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:04 crc kubenswrapper[4713]: I0126 16:30:04.486415 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:04 crc kubenswrapper[4713]: I0126 16:30:04.563867 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.063037 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.196450 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume\") pod \"246b0843-8505-46ec-9019-8b40ef113608\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.196622 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume\") pod \"246b0843-8505-46ec-9019-8b40ef113608\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.196668 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p2rs\" (UniqueName: \"kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs\") pod \"246b0843-8505-46ec-9019-8b40ef113608\" (UID: \"246b0843-8505-46ec-9019-8b40ef113608\") " Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.197037 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume" (OuterVolumeSpecName: "config-volume") pod "246b0843-8505-46ec-9019-8b40ef113608" (UID: "246b0843-8505-46ec-9019-8b40ef113608"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.197587 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246b0843-8505-46ec-9019-8b40ef113608-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.241945 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs" (OuterVolumeSpecName: "kube-api-access-4p2rs") pod "246b0843-8505-46ec-9019-8b40ef113608" (UID: "246b0843-8505-46ec-9019-8b40ef113608"). InnerVolumeSpecName "kube-api-access-4p2rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.244948 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "246b0843-8505-46ec-9019-8b40ef113608" (UID: "246b0843-8505-46ec-9019-8b40ef113608"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.302688 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/246b0843-8505-46ec-9019-8b40ef113608-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.302716 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p2rs\" (UniqueName: \"kubernetes.io/projected/246b0843-8505-46ec-9019-8b40ef113608-kube-api-access-4p2rs\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.436372 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.436390 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-5hrd6" event={"ID":"246b0843-8505-46ec-9019-8b40ef113608","Type":"ContainerDied","Data":"181a835f9cc4669cca17166e0311c950b2a7e46cc6f0154b2fda7618e8121bae"} Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:05.436788 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181a835f9cc4669cca17166e0311c950b2a7e46cc6f0154b2fda7618e8121bae" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:06.152801 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8"] Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:06.162071 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7s7r8"] Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:06.444942 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtfxn" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="registry-server" containerID="cri-o://421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" gracePeriod=2 Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:07.835086 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a6a4266-769b-46d9-b5f2-6873207578ba" path="/var/lib/kubelet/pods/8a6a4266-769b-46d9-b5f2-6873207578ba/volumes" Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:10.486962 4713 generic.go:334] "Generic (PLEG): container finished" podID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerID="421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" exitCode=0 Jan 26 16:30:17 crc kubenswrapper[4713]: I0126 16:30:10.487083 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerDied","Data":"421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6"} Jan 26 16:30:17 crc kubenswrapper[4713]: E0126 16:30:13.791523 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6 is running failed: container process not found" containerID="421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:30:17 crc kubenswrapper[4713]: E0126 16:30:13.792735 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6 is running failed: container process not found" containerID="421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:30:17 crc kubenswrapper[4713]: E0126 16:30:13.793077 4713 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6 is running failed: container process not found" containerID="421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:30:17 crc kubenswrapper[4713]: E0126 16:30:13.793108 4713 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-rtfxn" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="registry-server" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.299688 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.410928 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crw5m\" (UniqueName: \"kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m\") pod \"7750107e-4fca-41f0-ae40-e61d36d688a7\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.411006 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities\") pod \"7750107e-4fca-41f0-ae40-e61d36d688a7\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.411265 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content\") pod \"7750107e-4fca-41f0-ae40-e61d36d688a7\" (UID: \"7750107e-4fca-41f0-ae40-e61d36d688a7\") " Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.412148 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities" (OuterVolumeSpecName: "utilities") pod "7750107e-4fca-41f0-ae40-e61d36d688a7" (UID: "7750107e-4fca-41f0-ae40-e61d36d688a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.420249 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m" (OuterVolumeSpecName: "kube-api-access-crw5m") pod "7750107e-4fca-41f0-ae40-e61d36d688a7" (UID: "7750107e-4fca-41f0-ae40-e61d36d688a7"). InnerVolumeSpecName "kube-api-access-crw5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.513942 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crw5m\" (UniqueName: \"kubernetes.io/projected/7750107e-4fca-41f0-ae40-e61d36d688a7-kube-api-access-crw5m\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.514055 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.552633 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7750107e-4fca-41f0-ae40-e61d36d688a7" (UID: "7750107e-4fca-41f0-ae40-e61d36d688a7"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.592069 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfxn" event={"ID":"7750107e-4fca-41f0-ae40-e61d36d688a7","Type":"ContainerDied","Data":"7fd6cb1b740dad3c5e1b5b4002155c8309d959292887a8621f1f531b9b703c9b"} Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.592144 4713 scope.go:117] "RemoveContainer" containerID="421e3ee29b170708af02193cd2dec5e03ed8621c40312f9f8b841beb22c042e6" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.592160 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfxn" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.615667 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7750107e-4fca-41f0-ae40-e61d36d688a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.637932 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.638626 4713 scope.go:117] "RemoveContainer" containerID="41b5bc383f08d2527dc5ab449cbffc97b0a536b69797ae89d756570792481ef9" Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.648343 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtfxn"] Jan 26 16:30:18 crc kubenswrapper[4713]: I0126 16:30:18.676338 4713 scope.go:117] "RemoveContainer" containerID="7c561138bfdf226c40b5598a3dab40019d526fd66cd194feedeacc1fa9871731" Jan 26 16:30:19 crc kubenswrapper[4713]: I0126 16:30:19.826175 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" path="/var/lib/kubelet/pods/7750107e-4fca-41f0-ae40-e61d36d688a7/volumes" Jan 26 16:30:21 crc kubenswrapper[4713]: I0126 16:30:21.236933 4713 scope.go:117] "RemoveContainer" containerID="bd59df16ea7405eeb2bfb5a0db37e78a81e48fbc6280327281878376128a76f0" Jan 26 16:31:33 crc kubenswrapper[4713]: I0126 16:31:33.301162 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:31:33 crc kubenswrapper[4713]: I0126 16:31:33.301685 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:03 crc kubenswrapper[4713]: I0126 16:32:03.301789 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:32:03 crc kubenswrapper[4713]: I0126 16:32:03.302277 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:23 crc kubenswrapper[4713]: I0126 16:32:23.874948 4713 generic.go:334] "Generic (PLEG): container finished" podID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" containerID="3aa5efeca6cbf79cfe683bea216dee1794f29947a762975f9fc46d223099a198" exitCode=1 Jan 26 16:32:23 crc kubenswrapper[4713]: I0126 16:32:23.875024 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b9ed8b20-616a-49b3-b0bb-ad86c228de84","Type":"ContainerDied","Data":"3aa5efeca6cbf79cfe683bea216dee1794f29947a762975f9fc46d223099a198"} Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.467285 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.612287 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jkl5\" (UniqueName: \"kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.612399 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613458 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data" (OuterVolumeSpecName: "config-data") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613512 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613567 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613605 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613643 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613719 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613773 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.613819 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\" (UID: \"b9ed8b20-616a-49b3-b0bb-ad86c228de84\") " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.614742 4713 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.619005 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5" (OuterVolumeSpecName: "kube-api-access-9jkl5") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "kube-api-access-9jkl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.619121 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.619479 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.647045 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.655720 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.663422 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.683152 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717299 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jkl5\" (UniqueName: \"kubernetes.io/projected/b9ed8b20-616a-49b3-b0bb-ad86c228de84-kube-api-access-9jkl5\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717341 4713 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717354 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717385 4713 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717400 4713 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9ed8b20-616a-49b3-b0bb-ad86c228de84-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717414 4713 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9ed8b20-616a-49b3-b0bb-ad86c228de84-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.717450 4713 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.743897 4713 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.825098 4713 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.906811 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b9ed8b20-616a-49b3-b0bb-ad86c228de84","Type":"ContainerDied","Data":"a32bb26731e1c6ac71792b3d08ef0e4129bedd2fdd8e440b8c37be1d0098ca1f"} Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.906870 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a32bb26731e1c6ac71792b3d08ef0e4129bedd2fdd8e440b8c37be1d0098ca1f" Jan 26 16:32:25 crc kubenswrapper[4713]: I0126 16:32:25.906983 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 16:32:26 crc kubenswrapper[4713]: I0126 16:32:26.100989 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b9ed8b20-616a-49b3-b0bb-ad86c228de84" (UID: "b9ed8b20-616a-49b3-b0bb-ad86c228de84"). 
InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:32:26 crc kubenswrapper[4713]: I0126 16:32:26.133652 4713 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9ed8b20-616a-49b3-b0bb-ad86c228de84-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.223476 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:32:28 crc kubenswrapper[4713]: E0126 16:32:28.224029 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="extract-utilities" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224046 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="extract-utilities" Jan 26 16:32:28 crc kubenswrapper[4713]: E0126 16:32:28.224063 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224072 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:32:28 crc kubenswrapper[4713]: E0126 16:32:28.224090 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="extract-content" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224099 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="extract-content" Jan 26 16:32:28 crc kubenswrapper[4713]: E0126 16:32:28.224114 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246b0843-8505-46ec-9019-8b40ef113608" containerName="collect-profiles" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224123 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="246b0843-8505-46ec-9019-8b40ef113608" containerName="collect-profiles" Jan 26 16:32:28 crc kubenswrapper[4713]: E0126 16:32:28.224166 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="registry-server" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224177 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="registry-server" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224497 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="7750107e-4fca-41f0-ae40-e61d36d688a7" containerName="registry-server" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224530 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="246b0843-8505-46ec-9019-8b40ef113608" containerName="collect-profiles" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.224572 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ed8b20-616a-49b3-b0bb-ad86c228de84" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.225702 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.233069 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ld8dg" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.234835 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.379878 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvb5h\" (UniqueName: \"kubernetes.io/projected/e4e74714-800f-4449-931f-c2473dbd60d5-kube-api-access-hvb5h\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.379940 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.482895 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvb5h\" (UniqueName: \"kubernetes.io/projected/e4e74714-800f-4449-931f-c2473dbd60d5-kube-api-access-hvb5h\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.482995 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.483901 4713 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.505395 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvb5h\" (UniqueName: \"kubernetes.io/projected/e4e74714-800f-4449-931f-c2473dbd60d5-kube-api-access-hvb5h\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.540761 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e4e74714-800f-4449-931f-c2473dbd60d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:28 crc 
Jan 26 16:32:28 crc kubenswrapper[4713]: I0126 16:32:28.569588 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:32:29 crc kubenswrapper[4713]: I0126 16:32:29.052129 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:32:29 crc kubenswrapper[4713]: I0126 16:32:29.054188 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:32:29 crc kubenswrapper[4713]: I0126 16:32:29.964847 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e4e74714-800f-4449-931f-c2473dbd60d5","Type":"ContainerStarted","Data":"96f7a974afce4854d5682c1a3c6747f409db8de23bf78468308914a0d1489c70"} Jan 26 16:32:30 crc kubenswrapper[4713]: I0126 16:32:30.979023 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e4e74714-800f-4449-931f-c2473dbd60d5","Type":"ContainerStarted","Data":"6a4eeea1482c40f57136de5d505a9a082c22c323e47eb4e252146b175e03367f"} Jan 26 16:32:30 crc kubenswrapper[4713]: I0126 16:32:30.999032 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.21216028 podStartE2EDuration="2.99901012s" podCreationTimestamp="2026-01-26 16:32:28 +0000 UTC" firstStartedPulling="2026-01-26 16:32:29.05396946 +0000 UTC m=+3524.190986695" lastFinishedPulling="2026-01-26 16:32:29.84081928 +0000 UTC m=+3524.977836535" observedRunningTime="2026-01-26 16:32:30.996247141 +0000 UTC m=+3526.133264416" watchObservedRunningTime="2026-01-26 16:32:30.99901012 +0000 UTC m=+3526.136027365" Jan 26 16:32:33 crc kubenswrapper[4713]: I0126 16:32:33.301630 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:32:33 crc kubenswrapper[4713]: I0126 16:32:33.301976 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:33 crc kubenswrapper[4713]: I0126 16:32:33.302052 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:32:33 crc kubenswrapper[4713]: I0126 16:32:33.303072 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
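This is the turning point of the machine-config-daemon probe story: the liveness endpoint http://127.0.0.1:8798/health has been refusing connections at 30-second intervals (16:31:33, 16:32:03, 16:32:33), and on this third logged failure the kubelet marks the container for restart, consistent with a failure threshold of 3; the kill with its 600-second grace period and the replacement container follow immediately below. A sketch of the same HTTP check the probe performs, assuming it is run on the node itself since the daemon listens only on loopback:

    from urllib.request import urlopen
    from urllib.error import URLError

    # The endpoint is copied from the probe output above; 127.0.0.1:8798 is
    # only reachable from the node, so run this there.
    try:
        with urlopen('http://127.0.0.1:8798/health', timeout=1) as resp:
            print('healthy:', resp.status)
    except URLError as exc:
        print('probe failed:', exc.reason)   # e.g. connection refused, as logged above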
containerID="cri-o://c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb" gracePeriod=600 Jan 26 16:32:34 crc kubenswrapper[4713]: I0126 16:32:34.012259 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb" exitCode=0 Jan 26 16:32:34 crc kubenswrapper[4713]: I0126 16:32:34.012408 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb"} Jan 26 16:32:34 crc kubenswrapper[4713]: I0126 16:32:34.012582 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de"} Jan 26 16:32:34 crc kubenswrapper[4713]: I0126 16:32:34.012606 4713 scope.go:117] "RemoveContainer" containerID="010ff6f3cea33e9d48bbed69d793d15a8b9424bd49380685d1c1c85f03ca754b" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.566699 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.577715 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.597180 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.712640 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.713481 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.713592 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfws\" (UniqueName: \"kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.815236 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.815349 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfws\" 
(UniqueName: \"kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.815540 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.815715 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.815906 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.837597 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfws\" (UniqueName: \"kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws\") pod \"community-operators-s7h6h\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:38 crc kubenswrapper[4713]: I0126 16:32:38.912523 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:39 crc kubenswrapper[4713]: I0126 16:32:39.476744 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:40 crc kubenswrapper[4713]: I0126 16:32:40.151566 4713 generic.go:334] "Generic (PLEG): container finished" podID="db34a9dc-0c42-4092-8456-898f1d702b71" containerID="f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e" exitCode=0 Jan 26 16:32:40 crc kubenswrapper[4713]: I0126 16:32:40.151648 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerDied","Data":"f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e"} Jan 26 16:32:40 crc kubenswrapper[4713]: I0126 16:32:40.151823 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerStarted","Data":"e4259fbd1f00f958643accd054d1ebd6ae1ba628183282b6eeef1a002319f489"} Jan 26 16:32:42 crc kubenswrapper[4713]: I0126 16:32:42.187969 4713 generic.go:334] "Generic (PLEG): container finished" podID="db34a9dc-0c42-4092-8456-898f1d702b71" containerID="ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3" exitCode=0 Jan 26 16:32:42 crc kubenswrapper[4713]: I0126 16:32:42.188075 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerDied","Data":"ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3"} Jan 26 16:32:43 crc kubenswrapper[4713]: I0126 16:32:43.200466 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerStarted","Data":"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012"} Jan 26 16:32:43 crc kubenswrapper[4713]: I0126 16:32:43.235826 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s7h6h" podStartSLOduration=2.734848541 podStartE2EDuration="5.235804898s" podCreationTimestamp="2026-01-26 16:32:38 +0000 UTC" firstStartedPulling="2026-01-26 16:32:40.154173371 +0000 UTC m=+3535.291190606" lastFinishedPulling="2026-01-26 16:32:42.655129728 +0000 UTC m=+3537.792146963" observedRunningTime="2026-01-26 16:32:43.233159443 +0000 UTC m=+3538.370176678" watchObservedRunningTime="2026-01-26 16:32:43.235804898 +0000 UTC m=+3538.372822153" Jan 26 16:32:48 crc kubenswrapper[4713]: I0126 16:32:48.913604 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:48 crc kubenswrapper[4713]: I0126 16:32:48.914171 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:48 crc kubenswrapper[4713]: I0126 16:32:48.982292 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:49 crc kubenswrapper[4713]: I0126 16:32:49.322826 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:49 crc kubenswrapper[4713]: I0126 16:32:49.377043 4713 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:51 crc kubenswrapper[4713]: I0126 16:32:51.291731 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s7h6h" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="registry-server" containerID="cri-o://39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012" gracePeriod=2 Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.118473 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.229529 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities\") pod \"db34a9dc-0c42-4092-8456-898f1d702b71\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.229839 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content\") pod \"db34a9dc-0c42-4092-8456-898f1d702b71\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.229948 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gfws\" (UniqueName: \"kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws\") pod \"db34a9dc-0c42-4092-8456-898f1d702b71\" (UID: \"db34a9dc-0c42-4092-8456-898f1d702b71\") " Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.230814 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities" (OuterVolumeSpecName: "utilities") pod "db34a9dc-0c42-4092-8456-898f1d702b71" (UID: "db34a9dc-0c42-4092-8456-898f1d702b71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.231223 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.252671 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws" (OuterVolumeSpecName: "kube-api-access-7gfws") pod "db34a9dc-0c42-4092-8456-898f1d702b71" (UID: "db34a9dc-0c42-4092-8456-898f1d702b71"). InnerVolumeSpecName "kube-api-access-7gfws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.293454 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db34a9dc-0c42-4092-8456-898f1d702b71" (UID: "db34a9dc-0c42-4092-8456-898f1d702b71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.303271 4713 generic.go:334] "Generic (PLEG): container finished" podID="db34a9dc-0c42-4092-8456-898f1d702b71" containerID="39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012" exitCode=0 Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.303314 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerDied","Data":"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012"} Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.303341 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7h6h" event={"ID":"db34a9dc-0c42-4092-8456-898f1d702b71","Type":"ContainerDied","Data":"e4259fbd1f00f958643accd054d1ebd6ae1ba628183282b6eeef1a002319f489"} Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.303377 4713 scope.go:117] "RemoveContainer" containerID="39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.303510 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7h6h" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.333682 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34a9dc-0c42-4092-8456-898f1d702b71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.334126 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gfws\" (UniqueName: \"kubernetes.io/projected/db34a9dc-0c42-4092-8456-898f1d702b71-kube-api-access-7gfws\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.341941 4713 scope.go:117] "RemoveContainer" containerID="ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.363737 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.391953 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s7h6h"] Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.394546 4713 scope.go:117] "RemoveContainer" containerID="f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.435442 4713 scope.go:117] "RemoveContainer" containerID="39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012" Jan 26 16:32:52 crc kubenswrapper[4713]: E0126 16:32:52.435768 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012\": container with ID starting with 39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012 not found: ID does not exist" containerID="39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.435799 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012"} err="failed to get container status 
\"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012\": rpc error: code = NotFound desc = could not find container \"39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012\": container with ID starting with 39b46c75df3a3a31e8b82f5afd36bd114bd6213adce41133186f7a22f1af1012 not found: ID does not exist" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.435819 4713 scope.go:117] "RemoveContainer" containerID="ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3" Jan 26 16:32:52 crc kubenswrapper[4713]: E0126 16:32:52.436004 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3\": container with ID starting with ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3 not found: ID does not exist" containerID="ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.436024 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3"} err="failed to get container status \"ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3\": rpc error: code = NotFound desc = could not find container \"ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3\": container with ID starting with ea0dea9dbb3f16239666241997bd939b284a8e7807b22a9a619ca7af6e358ca3 not found: ID does not exist" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.436037 4713 scope.go:117] "RemoveContainer" containerID="f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e" Jan 26 16:32:52 crc kubenswrapper[4713]: E0126 16:32:52.438337 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e\": container with ID starting with f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e not found: ID does not exist" containerID="f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e" Jan 26 16:32:52 crc kubenswrapper[4713]: I0126 16:32:52.438402 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e"} err="failed to get container status \"f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e\": rpc error: code = NotFound desc = could not find container \"f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e\": container with ID starting with f62fe491d36a0f5837772f8136290d99da29e5ceda50b341dda744a3b9dd8b3e not found: ID does not exist" Jan 26 16:32:53 crc kubenswrapper[4713]: I0126 16:32:53.816298 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" path="/var/lib/kubelet/pods/db34a9dc-0c42-4092-8456-898f1d702b71/volumes" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.560200 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-plrnd/must-gather-6jmrh"] Jan 26 16:32:58 crc kubenswrapper[4713]: E0126 16:32:58.561178 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="registry-server" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.561195 4713 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="registry-server" Jan 26 16:32:58 crc kubenswrapper[4713]: E0126 16:32:58.561219 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="extract-content" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.561228 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="extract-content" Jan 26 16:32:58 crc kubenswrapper[4713]: E0126 16:32:58.561254 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="extract-utilities" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.561264 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="extract-utilities" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.561530 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="db34a9dc-0c42-4092-8456-898f1d702b71" containerName="registry-server" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.562882 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.566503 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-plrnd"/"openshift-service-ca.crt" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.566809 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-plrnd"/"kube-root-ca.crt" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.568546 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mkr\" (UniqueName: \"kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.568732 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.605093 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-plrnd/must-gather-6jmrh"] Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.671594 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66mkr\" (UniqueName: \"kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.671780 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.672250 4713 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.696964 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66mkr\" (UniqueName: \"kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr\") pod \"must-gather-6jmrh\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:58 crc kubenswrapper[4713]: I0126 16:32:58.889917 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:32:59 crc kubenswrapper[4713]: I0126 16:32:59.536480 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-plrnd/must-gather-6jmrh"] Jan 26 16:33:00 crc kubenswrapper[4713]: I0126 16:33:00.411249 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/must-gather-6jmrh" event={"ID":"0d6370a0-f234-4f00-a9da-f166704c4278","Type":"ContainerStarted","Data":"99b1ae3a0d153954bcd19aba2063a3f0ae29173fb65a04bde84c272bf174d8fa"} Jan 26 16:33:11 crc kubenswrapper[4713]: I0126 16:33:11.522995 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/must-gather-6jmrh" event={"ID":"0d6370a0-f234-4f00-a9da-f166704c4278","Type":"ContainerStarted","Data":"5a3b743afcb05eafa811e22722d7d7e3a73f8815f4f947e88f916d679be29e65"} Jan 26 16:33:11 crc kubenswrapper[4713]: I0126 16:33:11.523484 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/must-gather-6jmrh" event={"ID":"0d6370a0-f234-4f00-a9da-f166704c4278","Type":"ContainerStarted","Data":"3372a754ed90a6cde6b31e51fb834cd5ab29815ba53de9636fd291208780833e"} Jan 26 16:33:11 crc kubenswrapper[4713]: I0126 16:33:11.548785 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-plrnd/must-gather-6jmrh" podStartSLOduration=2.8694468730000002 podStartE2EDuration="13.548766105s" podCreationTimestamp="2026-01-26 16:32:58 +0000 UTC" firstStartedPulling="2026-01-26 16:32:59.543952209 +0000 UTC m=+3554.680969444" lastFinishedPulling="2026-01-26 16:33:10.223271441 +0000 UTC m=+3565.360288676" observedRunningTime="2026-01-26 16:33:11.541599641 +0000 UTC m=+3566.678616876" watchObservedRunningTime="2026-01-26 16:33:11.548766105 +0000 UTC m=+3566.685783340" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.746304 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-plrnd/crc-debug-g9s9v"] Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.748452 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.753144 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-plrnd"/"default-dockercfg-jvj9h" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.891589 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wbfk\" (UniqueName: \"kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.891777 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.993243 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wbfk\" (UniqueName: \"kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.993690 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:14 crc kubenswrapper[4713]: I0126 16:33:14.993794 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:15 crc kubenswrapper[4713]: I0126 16:33:15.012341 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wbfk\" (UniqueName: \"kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk\") pod \"crc-debug-g9s9v\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:15 crc kubenswrapper[4713]: I0126 16:33:15.065998 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:33:15 crc kubenswrapper[4713]: I0126 16:33:15.572742 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" event={"ID":"ab2de51d-7736-4bab-aae6-649cca887fbc","Type":"ContainerStarted","Data":"62ac05fc0b1abb58e2a0cd08a56ed6352a624f283313001e7ab19e5462edbe18"} Jan 26 16:33:31 crc kubenswrapper[4713]: E0126 16:33:31.328237 4713 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 26 16:33:31 crc kubenswrapper[4713]: E0126 16:33:31.328873 4713 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wbfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-g9s9v_openshift-must-gather-plrnd(ab2de51d-7736-4bab-aae6-649cca887fbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:33:31 crc kubenswrapper[4713]: E0126 16:33:31.330229 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" Jan 26 16:33:31 crc 
kubenswrapper[4713]: E0126 16:33:31.728506 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" Jan 26 16:33:43 crc kubenswrapper[4713]: I0126 16:33:43.835786 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" event={"ID":"ab2de51d-7736-4bab-aae6-649cca887fbc","Type":"ContainerStarted","Data":"e9006e879785174718b3f08a77106412d945ca79c3a5ac804efe84d067c18bad"} Jan 26 16:33:43 crc kubenswrapper[4713]: I0126 16:33:43.860793 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" podStartSLOduration=1.707423197 podStartE2EDuration="29.860772788s" podCreationTimestamp="2026-01-26 16:33:14 +0000 UTC" firstStartedPulling="2026-01-26 16:33:15.097229787 +0000 UTC m=+3570.234247022" lastFinishedPulling="2026-01-26 16:33:43.250579388 +0000 UTC m=+3598.387596613" observedRunningTime="2026-01-26 16:33:43.848825998 +0000 UTC m=+3598.985843233" watchObservedRunningTime="2026-01-26 16:33:43.860772788 +0000 UTC m=+3598.997790023" Jan 26 16:34:33 crc kubenswrapper[4713]: I0126 16:34:33.301215 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:34:33 crc kubenswrapper[4713]: I0126 16:34:33.301807 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:34:40 crc kubenswrapper[4713]: I0126 16:34:40.384638 4713 generic.go:334] "Generic (PLEG): container finished" podID="ab2de51d-7736-4bab-aae6-649cca887fbc" containerID="e9006e879785174718b3f08a77106412d945ca79c3a5ac804efe84d067c18bad" exitCode=0 Jan 26 16:34:40 crc kubenswrapper[4713]: I0126 16:34:40.384758 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" event={"ID":"ab2de51d-7736-4bab-aae6-649cca887fbc","Type":"ContainerDied","Data":"e9006e879785174718b3f08a77106412d945ca79c3a5ac804efe84d067c18bad"} Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.532962 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.587659 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-g9s9v"] Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.596542 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host\") pod \"ab2de51d-7736-4bab-aae6-649cca887fbc\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.596651 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wbfk\" (UniqueName: \"kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk\") pod \"ab2de51d-7736-4bab-aae6-649cca887fbc\" (UID: \"ab2de51d-7736-4bab-aae6-649cca887fbc\") " Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.598038 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host" (OuterVolumeSpecName: "host") pod "ab2de51d-7736-4bab-aae6-649cca887fbc" (UID: "ab2de51d-7736-4bab-aae6-649cca887fbc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.600921 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-g9s9v"] Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.605872 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk" (OuterVolumeSpecName: "kube-api-access-2wbfk") pod "ab2de51d-7736-4bab-aae6-649cca887fbc" (UID: "ab2de51d-7736-4bab-aae6-649cca887fbc"). InnerVolumeSpecName "kube-api-access-2wbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.700023 4713 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2de51d-7736-4bab-aae6-649cca887fbc-host\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.700072 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wbfk\" (UniqueName: \"kubernetes.io/projected/ab2de51d-7736-4bab-aae6-649cca887fbc-kube-api-access-2wbfk\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:41 crc kubenswrapper[4713]: I0126 16:34:41.820076 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" path="/var/lib/kubelet/pods/ab2de51d-7736-4bab-aae6-649cca887fbc/volumes" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.406508 4713 scope.go:117] "RemoveContainer" containerID="e9006e879785174718b3f08a77106412d945ca79c3a5ac804efe84d067c18bad" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.406562 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-g9s9v" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.750283 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-plrnd/crc-debug-txqjm"] Jan 26 16:34:42 crc kubenswrapper[4713]: E0126 16:34:42.750980 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" containerName="container-00" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.751004 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" containerName="container-00" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.751333 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2de51d-7736-4bab-aae6-649cca887fbc" containerName="container-00" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.752272 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.754497 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-plrnd"/"default-dockercfg-jvj9h" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.822397 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrn4t\" (UniqueName: \"kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.822487 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.924544 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrn4t\" (UniqueName: \"kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.924620 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.925131 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:42 crc kubenswrapper[4713]: I0126 16:34:42.950437 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrn4t\" (UniqueName: \"kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t\") pod \"crc-debug-txqjm\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:43 crc kubenswrapper[4713]: I0126 
16:34:43.075405 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:43 crc kubenswrapper[4713]: I0126 16:34:43.416760 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-txqjm" event={"ID":"8999c30c-b0b1-40ea-bffd-147fd47ce968","Type":"ContainerStarted","Data":"516499e34ccee62ddaf51404d6fdfe1a0bdaf1ef9c4111be04f38e09e603e878"} Jan 26 16:34:43 crc kubenswrapper[4713]: I0126 16:34:43.417253 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-txqjm" event={"ID":"8999c30c-b0b1-40ea-bffd-147fd47ce968","Type":"ContainerStarted","Data":"a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c"} Jan 26 16:34:43 crc kubenswrapper[4713]: I0126 16:34:43.429073 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-plrnd/crc-debug-txqjm" podStartSLOduration=1.429053945 podStartE2EDuration="1.429053945s" podCreationTimestamp="2026-01-26 16:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:34:43.428573132 +0000 UTC m=+3658.565590357" watchObservedRunningTime="2026-01-26 16:34:43.429053945 +0000 UTC m=+3658.566071180" Jan 26 16:34:44 crc kubenswrapper[4713]: I0126 16:34:44.442169 4713 generic.go:334] "Generic (PLEG): container finished" podID="8999c30c-b0b1-40ea-bffd-147fd47ce968" containerID="516499e34ccee62ddaf51404d6fdfe1a0bdaf1ef9c4111be04f38e09e603e878" exitCode=0 Jan 26 16:34:44 crc kubenswrapper[4713]: I0126 16:34:44.442652 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-txqjm" event={"ID":"8999c30c-b0b1-40ea-bffd-147fd47ce968","Type":"ContainerDied","Data":"516499e34ccee62ddaf51404d6fdfe1a0bdaf1ef9c4111be04f38e09e603e878"} Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.560850 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.601060 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-txqjm"] Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.612886 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-txqjm"] Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.677757 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host\") pod \"8999c30c-b0b1-40ea-bffd-147fd47ce968\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.677908 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host" (OuterVolumeSpecName: "host") pod "8999c30c-b0b1-40ea-bffd-147fd47ce968" (UID: "8999c30c-b0b1-40ea-bffd-147fd47ce968"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.678334 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrn4t\" (UniqueName: \"kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t\") pod \"8999c30c-b0b1-40ea-bffd-147fd47ce968\" (UID: \"8999c30c-b0b1-40ea-bffd-147fd47ce968\") " Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.678997 4713 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8999c30c-b0b1-40ea-bffd-147fd47ce968-host\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.687837 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t" (OuterVolumeSpecName: "kube-api-access-hrn4t") pod "8999c30c-b0b1-40ea-bffd-147fd47ce968" (UID: "8999c30c-b0b1-40ea-bffd-147fd47ce968"). InnerVolumeSpecName "kube-api-access-hrn4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.780860 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrn4t\" (UniqueName: \"kubernetes.io/projected/8999c30c-b0b1-40ea-bffd-147fd47ce968-kube-api-access-hrn4t\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:45 crc kubenswrapper[4713]: I0126 16:34:45.819480 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8999c30c-b0b1-40ea-bffd-147fd47ce968" path="/var/lib/kubelet/pods/8999c30c-b0b1-40ea-bffd-147fd47ce968/volumes" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.463196 4713 scope.go:117] "RemoveContainer" containerID="516499e34ccee62ddaf51404d6fdfe1a0bdaf1ef9c4111be04f38e09e603e878" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.463227 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-txqjm" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.814835 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-plrnd/crc-debug-hfzzs"] Jan 26 16:34:46 crc kubenswrapper[4713]: E0126 16:34:46.816877 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8999c30c-b0b1-40ea-bffd-147fd47ce968" containerName="container-00" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.817016 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="8999c30c-b0b1-40ea-bffd-147fd47ce968" containerName="container-00" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.817411 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="8999c30c-b0b1-40ea-bffd-147fd47ce968" containerName="container-00" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.818471 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.821619 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-plrnd"/"default-dockercfg-jvj9h" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.908305 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:46 crc kubenswrapper[4713]: I0126 16:34:46.908403 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42k8\" (UniqueName: \"kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.010190 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.010251 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w42k8\" (UniqueName: \"kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.010319 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.027961 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w42k8\" (UniqueName: \"kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8\") pod \"crc-debug-hfzzs\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.135891 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.477062 4713 generic.go:334] "Generic (PLEG): container finished" podID="f9117d8e-4e32-4838-bf73-54ca9f4ff7df" containerID="f3c8a49eaf46bdaa7bc7c29067110ca78f5c34ce85f916019bad71409c21b6f9" exitCode=0 Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.477226 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" event={"ID":"f9117d8e-4e32-4838-bf73-54ca9f4ff7df","Type":"ContainerDied","Data":"f3c8a49eaf46bdaa7bc7c29067110ca78f5c34ce85f916019bad71409c21b6f9"} Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.477326 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" event={"ID":"f9117d8e-4e32-4838-bf73-54ca9f4ff7df","Type":"ContainerStarted","Data":"c49908755b8191c881c4285118f9d84e792d2278938c437d1a09e29686845977"} Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.523238 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-hfzzs"] Jan 26 16:34:47 crc kubenswrapper[4713]: I0126 16:34:47.533720 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-plrnd/crc-debug-hfzzs"] Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.599621 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.642389 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host\") pod \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.642515 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host" (OuterVolumeSpecName: "host") pod "f9117d8e-4e32-4838-bf73-54ca9f4ff7df" (UID: "f9117d8e-4e32-4838-bf73-54ca9f4ff7df"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.642620 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w42k8\" (UniqueName: \"kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8\") pod \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\" (UID: \"f9117d8e-4e32-4838-bf73-54ca9f4ff7df\") " Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.643236 4713 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-host\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.648627 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8" (OuterVolumeSpecName: "kube-api-access-w42k8") pod "f9117d8e-4e32-4838-bf73-54ca9f4ff7df" (UID: "f9117d8e-4e32-4838-bf73-54ca9f4ff7df"). InnerVolumeSpecName "kube-api-access-w42k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:34:48 crc kubenswrapper[4713]: I0126 16:34:48.744740 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w42k8\" (UniqueName: \"kubernetes.io/projected/f9117d8e-4e32-4838-bf73-54ca9f4ff7df-kube-api-access-w42k8\") on node \"crc\" DevicePath \"\"" Jan 26 16:34:49 crc kubenswrapper[4713]: I0126 16:34:49.498982 4713 scope.go:117] "RemoveContainer" containerID="f3c8a49eaf46bdaa7bc7c29067110ca78f5c34ce85f916019bad71409c21b6f9" Jan 26 16:34:49 crc kubenswrapper[4713]: I0126 16:34:49.499041 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/crc-debug-hfzzs" Jan 26 16:34:49 crc kubenswrapper[4713]: I0126 16:34:49.814388 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9117d8e-4e32-4838-bf73-54ca9f4ff7df" path="/var/lib/kubelet/pods/f9117d8e-4e32-4838-bf73-54ca9f4ff7df/volumes" Jan 26 16:34:51 crc kubenswrapper[4713]: E0126 16:34:51.271073 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:01 crc kubenswrapper[4713]: E0126 16:35:01.530981 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:03 crc kubenswrapper[4713]: I0126 16:35:03.302127 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:35:03 crc kubenswrapper[4713]: I0126 16:35:03.302510 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:35:11 crc kubenswrapper[4713]: E0126 16:35:11.797407 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:15 crc kubenswrapper[4713]: I0126 16:35:15.788531 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_a25c5d9b-6658-4b9a-8fe7-fb4b3714696e/init-config-reloader/0.log" Jan 26 16:35:15 crc kubenswrapper[4713]: I0126 16:35:15.954592 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_a25c5d9b-6658-4b9a-8fe7-fb4b3714696e/alertmanager/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.000736 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_a25c5d9b-6658-4b9a-8fe7-fb4b3714696e/config-reloader/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.001533 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_a25c5d9b-6658-4b9a-8fe7-fb4b3714696e/init-config-reloader/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.113678 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59866b8478-b6cbm_a611ae0d-da10-46d8-8520-0a3dd75e1d1c/barbican-api/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.232510 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59866b8478-b6cbm_a611ae0d-da10-46d8-8520-0a3dd75e1d1c/barbican-api-log/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.276740 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f98f767bd-dxj2n_f66c4ca0-2422-43a4-b461-f7b0cd0becea/barbican-keystone-listener/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.473763 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4f76bb9-7rdcn_edf16ebc-72a3-4cd6-a314-0737b0252d95/barbican-worker/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.485254 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f98f767bd-dxj2n_f66c4ca0-2422-43a4-b461-f7b0cd0becea/barbican-keystone-listener-log/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.543980 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4f76bb9-7rdcn_edf16ebc-72a3-4cd6-a314-0737b0252d95/barbican-worker-log/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.775104 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wgcq7_3221883d-48d9-4953-aeba-4969c3ea1ed9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:16 crc kubenswrapper[4713]: I0126 16:35:16.910145 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a194e4ba-2c4a-4d27-ad03-d8208f85cf13/ceilometer-central-agent/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.044324 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a194e4ba-2c4a-4d27-ad03-d8208f85cf13/ceilometer-notification-agent/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.062653 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a194e4ba-2c4a-4d27-ad03-d8208f85cf13/proxy-httpd/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.136338 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a194e4ba-2c4a-4d27-ad03-d8208f85cf13/sg-core/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.262687 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a992c5f-9e04-4776-8603-5c9b4def66c7/cinder-api-log/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.310985 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a992c5f-9e04-4776-8603-5c9b4def66c7/cinder-api/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.471106 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_59c0b6f8-caab-480e-8fd6-7e7e896efaaa/cinder-scheduler/0.log" Jan 26 16:35:17 crc 
kubenswrapper[4713]: I0126 16:35:17.517044 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_59c0b6f8-caab-480e-8fd6-7e7e896efaaa/probe/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.680279 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0/cloudkitty-api-log/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.771771 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_d03fb00d-d7ae-4f79-95b7-b1a8b717e2a0/cloudkitty-api/0.log" Jan 26 16:35:17 crc kubenswrapper[4713]: I0126 16:35:17.839509 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_9144f526-8060-4b3b-bf78-26babcd1d963/loki-compactor/0.log" Jan 26 16:35:18 crc kubenswrapper[4713]: I0126 16:35:18.151295 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-qngml_a1acb746-e41c-4b08-aefb-1277d7e710c9/loki-distributor/0.log" Jan 26 16:35:18 crc kubenswrapper[4713]: I0126 16:35:18.244129 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-7j8jx_d4f06dea-6c6e-4c23-a3e0-c10144d7338c/gateway/0.log" Jan 26 16:35:18 crc kubenswrapper[4713]: I0126 16:35:18.346453 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-rstdb_912dd8bd-b0f7-441d-82fe-547964030ae5/gateway/0.log" Jan 26 16:35:18 crc kubenswrapper[4713]: I0126 16:35:18.585560 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_ba185c6c-eecc-45d1-adef-b3bd7fa84686/loki-index-gateway/0.log" Jan 26 16:35:18 crc kubenswrapper[4713]: I0126 16:35:18.738166 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_a45d2a2d-be1b-476e-8fbf-f9bdd5a97301/loki-ingester/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.072548 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-xdr68_deeee241-0904-4385-b17a-b390dfc5b2d4/loki-query-frontend/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.167887 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-hvlk9_f47dae24-9ea7-4625-a367-43fd29037227/loki-querier/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.419412 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5nm87_fd765284-f110-48b0-b7c7-0116b2f6a5e0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.654997 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v25bc_a90da253-d811-48ca-be82-642679ec25b9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.820726 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-d4j8b_6c10b80b-7a08-427b-ac13-29beceb2efd3/init/0.log" Jan 26 16:35:19 crc kubenswrapper[4713]: I0126 16:35:19.984503 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-d4j8b_6c10b80b-7a08-427b-ac13-29beceb2efd3/init/0.log" Jan 26 16:35:20 crc 
kubenswrapper[4713]: I0126 16:35:20.051244 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-d4j8b_6c10b80b-7a08-427b-ac13-29beceb2efd3/dnsmasq-dns/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.072189 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6ptxq_a590086e-4f64-45f1-8bc9-b1772bd1d7b4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.318691 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ad16ac21-9aee-4776-b4fb-cb51324f625f/glance-httpd/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.355320 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ad16ac21-9aee-4776-b4fb-cb51324f625f/glance-log/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.450504 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_21a8d06f-05be-44a6-82c7-f61788570aad/glance-httpd/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.641329 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_21a8d06f-05be-44a6-82c7-f61788570aad/glance-log/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.721380 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-hm6w8_c1e12c7f-4a67-4ef8-80c4-1c24f0269834/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:20 crc kubenswrapper[4713]: I0126 16:35:20.908777 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rw26g_c3327a99-89b1-4901-b833-6c6c915839cb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:21 crc kubenswrapper[4713]: I0126 16:35:21.110239 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490721-t2fgk_b5e1bc57-74ed-4f5e-a6e5-55cda8086cf1/keystone-cron/0.log" Jan 26 16:35:21 crc kubenswrapper[4713]: I0126 16:35:21.310517 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d456999d-27w6v_7909e8d5-a534-4178-9f85-70c7b10eae4e/keystone-api/0.log" Jan 26 16:35:21 crc kubenswrapper[4713]: I0126 16:35:21.388387 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b32ad743-8c23-46d2-83aa-4eef34971aa7/kube-state-metrics/0.log" Jan 26 16:35:21 crc kubenswrapper[4713]: I0126 16:35:21.583979 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4dnd8_ab00c6e0-12fb-4e99-be6b-ca341fbfb235/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:21 crc kubenswrapper[4713]: I0126 16:35:21.931396 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-587f599955-5k56n_df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a/neutron-httpd/0.log" Jan 26 16:35:22 crc kubenswrapper[4713]: I0126 16:35:22.078472 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-587f599955-5k56n_df8a26d4-1b5e-4dcd-894d-dd736d8a4a9a/neutron-api/0.log" Jan 26 16:35:22 crc kubenswrapper[4713]: E0126 16:35:22.107085 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:22 crc kubenswrapper[4713]: I0126 16:35:22.135436 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2cxbl_409601d1-035c-435e-a892-4cb0a2f6760e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:22 crc kubenswrapper[4713]: I0126 16:35:22.789812 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_54c69470-fd8d-4553-a1d3-4db65c424a2f/nova-api-log/0.log" Jan 26 16:35:22 crc kubenswrapper[4713]: I0126 16:35:22.846680 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8acd4ad8-e9b7-4f39-9db6-f7139861e1c3/nova-cell0-conductor-conductor/0.log" Jan 26 16:35:23 crc kubenswrapper[4713]: I0126 16:35:23.031681 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_54c69470-fd8d-4553-a1d3-4db65c424a2f/nova-api-api/0.log" Jan 26 16:35:23 crc kubenswrapper[4713]: I0126 16:35:23.303865 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_10eb8f34-03cf-4b24-b8fd-63fe3886d2d9/nova-cell1-conductor-conductor/0.log" Jan 26 16:35:23 crc kubenswrapper[4713]: I0126 16:35:23.320505 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6fc31f1a-f23d-4efd-bf16-3796bc2a948d/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 16:35:23 crc kubenswrapper[4713]: I0126 16:35:23.603140 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x44rg_9cbdcf66-dfbd-43c4-a6b6-3e450dfb70e7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:24 crc kubenswrapper[4713]: I0126 16:35:24.056546 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fc59197a-2a96-4fe1-a320-f285fb456203/nova-metadata-log/0.log" Jan 26 16:35:24 crc kubenswrapper[4713]: I0126 16:35:24.715877 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_86f768fe-d211-44f1-9341-6d596fc18452/nova-scheduler-scheduler/0.log" Jan 26 16:35:24 crc kubenswrapper[4713]: I0126 16:35:24.858205 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5bba60c2-25f6-41a7-a231-51fc5a6a9d3b/mysql-bootstrap/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.139783 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5bba60c2-25f6-41a7-a231-51fc5a6a9d3b/mysql-bootstrap/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.194400 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5bba60c2-25f6-41a7-a231-51fc5a6a9d3b/galera/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.318138 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fc59197a-2a96-4fe1-a320-f285fb456203/nova-metadata-metadata/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.455541 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cf79fdc1-80c7-4f65-98e0-b08803c07edc/mysql-bootstrap/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.709459 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_cf79fdc1-80c7-4f65-98e0-b08803c07edc/mysql-bootstrap/0.log" Jan 26 16:35:25 crc kubenswrapper[4713]: I0126 16:35:25.758799 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cf79fdc1-80c7-4f65-98e0-b08803c07edc/galera/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.015131 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5ee23a80-20ad-45b5-9670-c165085175ab/openstackclient/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.096919 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c9tvd_518d38d7-b30e-4d67-a3d7-456e26fc9869/ovn-controller/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.219641 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ht5fq_499db69b-0e82-43b8-99e0-262258615861/openstack-network-exporter/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.437391 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rl7z9_d161dabd-5253-4929-998e-07f3d465a03d/ovsdb-server-init/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.645428 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rl7z9_d161dabd-5253-4929-998e-07f3d465a03d/ovs-vswitchd/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.649684 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rl7z9_d161dabd-5253-4929-998e-07f3d465a03d/ovsdb-server/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.650862 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rl7z9_d161dabd-5253-4929-998e-07f3d465a03d/ovsdb-server-init/0.log" Jan 26 16:35:26 crc kubenswrapper[4713]: I0126 16:35:26.995970 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-g6c6s_b67e1167-2e6c-4061-a95f-61fed731f252/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:27 crc kubenswrapper[4713]: I0126 16:35:27.203770 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e6820209-510a-4346-b86d-006535127cc9/ovn-northd/0.log" Jan 26 16:35:27 crc kubenswrapper[4713]: I0126 16:35:27.218043 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e6820209-510a-4346-b86d-006535127cc9/openstack-network-exporter/0.log" Jan 26 16:35:27 crc kubenswrapper[4713]: I0126 16:35:27.409553 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4567e561-0bd8-4368-8868-e2531d7bb8d3/openstack-network-exporter/0.log" Jan 26 16:35:27 crc kubenswrapper[4713]: I0126 16:35:27.412672 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4567e561-0bd8-4368-8868-e2531d7bb8d3/ovsdbserver-nb/0.log" Jan 26 16:35:27 crc kubenswrapper[4713]: I0126 16:35:27.840914 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a0b03b5-597a-4c59-9784-218e9f9442d1/openstack-network-exporter/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.034973 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a0b03b5-597a-4c59-9784-218e9f9442d1/ovsdbserver-sb/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.345143 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-6bb4458d9d-r4dmr_bf40e2be-eb43-4c3d-aa4e-58c164059384/placement-api/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.434048 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bb4458d9d-r4dmr_bf40e2be-eb43-4c3d-aa4e-58c164059384/placement-log/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.555083 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa01a31-895a-4fcd-845b-264c0cec88de/init-config-reloader/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.756651 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa01a31-895a-4fcd-845b-264c0cec88de/init-config-reloader/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.771531 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa01a31-895a-4fcd-845b-264c0cec88de/config-reloader/0.log" Jan 26 16:35:28 crc kubenswrapper[4713]: I0126 16:35:28.776168 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa01a31-895a-4fcd-845b-264c0cec88de/prometheus/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.038399 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa01a31-895a-4fcd-845b-264c0cec88de/thanos-sidecar/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.132242 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43b98a31-5771-411a-b08d-1c3f17c50a4d/setup-container/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.341552 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43b98a31-5771-411a-b08d-1c3f17c50a4d/rabbitmq/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.345144 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43b98a31-5771-411a-b08d-1c3f17c50a4d/setup-container/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.534320 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_36f2aa2e-c567-4d86-b3d6-c3572a45ccd1/setup-container/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.792791 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_36f2aa2e-c567-4d86-b3d6-c3572a45ccd1/setup-container/0.log" Jan 26 16:35:29 crc kubenswrapper[4713]: I0126 16:35:29.852949 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_36f2aa2e-c567-4d86-b3d6-c3572a45ccd1/rabbitmq/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.017759 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-g9s7s_3b5b4774-7255-4b3d-ade6-994be4687006/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.096741 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d66m2_ca744311-cd43-444e-ba20-ad3a2e26a7a4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.397230 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-75fjv_1bb061f5-90cb-4f19-a0e4-3fd295a232a2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.619644 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-94nfq_f846eed5-7039-4b1b-b45f-ca6363c482a5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.732534 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9tw4m_f48dd46c-3b9a-484b-887f-e916f70a7123/ssh-known-hosts-edpm-deployment/0.log" Jan 26 16:35:30 crc kubenswrapper[4713]: I0126 16:35:30.972777 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56d946d655-hw5fz_0e6ea6b1-cd00-4552-8a20-cfb0055b58dc/proxy-server/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.074686 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56d946d655-hw5fz_0e6ea6b1-cd00-4552-8a20-cfb0055b58dc/proxy-httpd/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.189714 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7x6k4_125bdff8-6eff-4f59-9cc4-c986c5771aa0/swift-ring-rebalance/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.463463 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/account-auditor/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.533614 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/account-reaper/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.765783 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/account-server/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.780833 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/account-replicator/0.log" Jan 26 16:35:31 crc kubenswrapper[4713]: I0126 16:35:31.872283 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/container-auditor/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.007480 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/container-replicator/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.039279 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/container-server/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.085920 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/container-updater/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.202649 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/object-auditor/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.332006 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/object-expirer/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: 
I0126 16:35:32.371186 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/object-replicator/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: E0126 16:35:32.470786 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.473316 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/object-server/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.616261 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/object-updater/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.676800 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/rsync/0.log" Jan 26 16:35:32 crc kubenswrapper[4713]: I0126 16:35:32.754870 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0432b2d-538e-4b04-899b-6fe666f340de/swift-recon-cron/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.027852 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-jcc4b_a4c0ccc6-3259-4551-be60-b8b5599884fa/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.142343 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b9ed8b20-616a-49b3-b0bb-ad86c228de84/tempest-tests-tempest-tests-runner/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.258078 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e4e74714-800f-4449-931f-c2473dbd60d5/test-operator-logs-container/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.301053 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.301400 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.301534 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.302435 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.302581 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" gracePeriod=600 Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.423202 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_202dfc25-10dd-4c42-9c53-ccc3220a140b/cloudkitty-proc/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: E0126 16:35:33.437081 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.484177 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-78h78_1393b971-2819-450f-a44b-978658f849e5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.926244 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" exitCode=0 Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.926292 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de"} Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.926620 4713 scope.go:117] "RemoveContainer" containerID="c312980dd2a4333984ac586a57b2840623b9ab4a72d766eeb1ce1d72aca22abb" Jan 26 16:35:33 crc kubenswrapper[4713]: I0126 16:35:33.927287 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:35:33 crc kubenswrapper[4713]: E0126 16:35:33.927570 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:35:39 crc kubenswrapper[4713]: I0126 16:35:39.287977 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6637e535-e95f-407f-a97d-11da8ad9629c/memcached/0.log" Jan 26 16:35:42 crc kubenswrapper[4713]: E0126 16:35:42.761355 4713 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8999c30c_b0b1_40ea_bffd_147fd47ce968.slice/crio-a6330f7051774de14b919704f466dba22d59c248d29d8f230ac13f74ede8c44c\": RecentStats: unable to find data in memory cache]" Jan 26 16:35:45 crc kubenswrapper[4713]: E0126 16:35:45.837650 
4713 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9e22c3c9ceabf8d0bb0e4a7193ff483b325ee8dd611d7a68abe02eacaa2a324c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9e22c3c9ceabf8d0bb0e4a7193ff483b325ee8dd611d7a68abe02eacaa2a324c/diff: no such file or directory, extraDiskErr: Jan 26 16:35:46 crc kubenswrapper[4713]: I0126 16:35:46.804067 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:35:46 crc kubenswrapper[4713]: E0126 16:35:46.804301 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:35:57 crc kubenswrapper[4713]: I0126 16:35:57.803747 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:35:57 crc kubenswrapper[4713]: E0126 16:35:57.804622 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:36:02 crc kubenswrapper[4713]: I0126 16:36:02.702978 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-8sbhh_6d87a00d-b4a5-449e-b744-d9680cbba82e/manager/0.log" Jan 26 16:36:02 crc kubenswrapper[4713]: I0126 16:36:02.946189 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/util/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.204344 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/util/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.260240 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/pull/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.318734 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/pull/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.481302 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/pull/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.505513 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/util/0.log" Jan 26 16:36:03 crc 
kubenswrapper[4713]: I0126 16:36:03.513581 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16frjtxp_db0b8456-060e-49fe-bbe7-12d695b3a3dc/extract/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.694230 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-fzsfn_ad9b077e-c81e-4cf5-bc8d-c7405e7b25c4/manager/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.716461 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-cqv2q_6fab0ebc-dfbb-45f5-9802-5cf0145acf7b/manager/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.935451 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-gjzk8_fed44574-f4a7-42df-9179-b2f8a64d180e/manager/0.log" Jan 26 16:36:03 crc kubenswrapper[4713]: I0126 16:36:03.973700 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-q9h27_c967ecd2-cf7b-428e-8e86-320c481901fd/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.186608 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7hgnh_a4e0ef5f-5c6e-4ceb-80c2-25769c178450/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.487957 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-rgk5d_d4485006-069c-45c8-8515-ff65913e2d54/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.541950 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-pnkxb_21c903f2-40b2-420b-830c-64298a2a77bb/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.713209 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-6kns8_83ccbdec-a448-4674-896e-9c634981df65/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.757195 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-rvxn4_c9b5d10a-9eac-4ecf-b3c6-297e15d1f6ed/manager/0.log" Jan 26 16:36:04 crc kubenswrapper[4713]: I0126 16:36:04.917760 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-nwz8j_51c3ef5e-a43e-4c76-aab9-ec9d22939005/manager/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 16:36:05.002287 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-lnw6c_67c02797-1141-4757-aa6e-de1678f8cf47/manager/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 16:36:05.204591 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7qhb9_bcb2380b-e7a0-4f46-b6cb-23a57fa36fba/manager/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 16:36:05.284143 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-g42hg_525d44f1-86e8-4e11-8022-d428ed5a8440/manager/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 
16:36:05.457541 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854dm59m_a4cc3f25-acc8-4ce3-8269-2ccb7f042709/manager/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 16:36:05.736385 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8f6df5568-zmrnb_b41b6a3b-8d2a-4213-a114-f84a4ca574c0/operator/0.log" Jan 26 16:36:05 crc kubenswrapper[4713]: I0126 16:36:05.880988 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k8jhj_b2c70989-5c15-405f-b07d-4ae1a6160f6a/registry-server/0.log" Jan 26 16:36:06 crc kubenswrapper[4713]: I0126 16:36:06.329243 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cndqq_feea11ba-0497-418d-8316-8510b6d807bb/manager/0.log" Jan 26 16:36:06 crc kubenswrapper[4713]: I0126 16:36:06.701186 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-b6h7z_d4a41bce-dc81-49f2-80a7-06545140458d/manager/0.log" Jan 26 16:36:06 crc kubenswrapper[4713]: I0126 16:36:06.811964 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-44jqj_3161c386-6b19-4c7e-aa02-8a95984cc71c/operator/0.log" Jan 26 16:36:06 crc kubenswrapper[4713]: I0126 16:36:06.813321 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d6b58b596-rpgqj_a523ff90-92c7-49b5-a532-20d7b7246892/manager/0.log" Jan 26 16:36:06 crc kubenswrapper[4713]: I0126 16:36:06.986038 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-lhfk5_e9395a15-d653-40bb-bb55-8a800b1a0dae/manager/0.log" Jan 26 16:36:07 crc kubenswrapper[4713]: I0126 16:36:07.228961 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-t6l4x_0e1fcfa7-ee98-4834-93b3-578a9463adca/manager/0.log" Jan 26 16:36:07 crc kubenswrapper[4713]: I0126 16:36:07.326707 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gbmmw_69bffd4a-b644-47b2-90ba-83716eb3b40b/manager/0.log" Jan 26 16:36:07 crc kubenswrapper[4713]: I0126 16:36:07.371743 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fd4748d4d-2q6vz_a1e3d291-c14b-4645-9c72-dca8413eb5e7/manager/0.log" Jan 26 16:36:09 crc kubenswrapper[4713]: I0126 16:36:09.804293 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:36:09 crc kubenswrapper[4713]: E0126 16:36:09.804858 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:36:20 crc kubenswrapper[4713]: I0126 16:36:20.804356 4713 scope.go:117] "RemoveContainer" 
containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:36:20 crc kubenswrapper[4713]: E0126 16:36:20.805085 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.787153 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:29 crc kubenswrapper[4713]: E0126 16:36:29.788238 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9117d8e-4e32-4838-bf73-54ca9f4ff7df" containerName="container-00" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.788256 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9117d8e-4e32-4838-bf73-54ca9f4ff7df" containerName="container-00" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.788547 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9117d8e-4e32-4838-bf73-54ca9f4ff7df" containerName="container-00" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.793297 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.825085 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.923272 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.923752 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:29 crc kubenswrapper[4713]: I0126 16:36:29.923783 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.025840 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.026172 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.026288 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.026872 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.027133 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.051318 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr\") pod \"redhat-marketplace-6g5mq\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.138020 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.222908 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2rd4s_c9e722bd-c443-4cb6-8104-e630a4c0b58f/control-plane-machine-set-operator/0.log" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.485392 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ss5h8_9c219134-328d-4145-8dd2-3f01df03a055/kube-rbac-proxy/0.log" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.649262 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ss5h8_9c219134-328d-4145-8dd2-3f01df03a055/machine-api-operator/0.log" Jan 26 16:36:30 crc kubenswrapper[4713]: I0126 16:36:30.725954 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:31 crc kubenswrapper[4713]: I0126 16:36:31.568341 4713 generic.go:334] "Generic (PLEG): container finished" podID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerID="8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696" exitCode=0 Jan 26 16:36:31 crc kubenswrapper[4713]: I0126 16:36:31.568588 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerDied","Data":"8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696"} Jan 26 16:36:31 crc kubenswrapper[4713]: I0126 16:36:31.568898 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerStarted","Data":"75631f6de29fe3f7a0d1bcff05bcd1a2c0ba1f65beb651de1ac8c0536826a9c0"} Jan 26 16:36:32 crc kubenswrapper[4713]: I0126 16:36:32.579751 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerStarted","Data":"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5"} Jan 26 16:36:32 crc kubenswrapper[4713]: I0126 16:36:32.803794 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:36:32 crc kubenswrapper[4713]: E0126 16:36:32.804166 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:36:33 crc kubenswrapper[4713]: I0126 16:36:33.593312 4713 generic.go:334] "Generic (PLEG): container finished" podID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerID="ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5" exitCode=0 Jan 26 16:36:33 crc kubenswrapper[4713]: I0126 16:36:33.593415 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerDied","Data":"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5"} Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 
16:36:34.190652 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.194434 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.249793 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.326032 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.326479 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgbh\" (UniqueName: \"kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.326638 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.428881 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgbh\" (UniqueName: \"kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.429293 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.429423 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.430079 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.430147 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.612677 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerStarted","Data":"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452"} Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.650213 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6g5mq" podStartSLOduration=3.211092994 podStartE2EDuration="5.650177935s" podCreationTimestamp="2026-01-26 16:36:29 +0000 UTC" firstStartedPulling="2026-01-26 16:36:31.573402645 +0000 UTC m=+3766.710419880" lastFinishedPulling="2026-01-26 16:36:34.012487586 +0000 UTC m=+3769.149504821" observedRunningTime="2026-01-26 16:36:34.634493795 +0000 UTC m=+3769.771511040" watchObservedRunningTime="2026-01-26 16:36:34.650177935 +0000 UTC m=+3769.787195170" Jan 26 16:36:34 crc kubenswrapper[4713]: I0126 16:36:34.883156 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgbh\" (UniqueName: \"kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh\") pod \"certified-operators-kzwjw\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:35 crc kubenswrapper[4713]: I0126 16:36:35.133626 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:35 crc kubenswrapper[4713]: I0126 16:36:35.652741 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:35 crc kubenswrapper[4713]: W0126 16:36:35.664811 4713 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d4b2502_f694_41a5_b1af_bb02c6110d1a.slice/crio-8b36b20830d480dd3c5d6afad0f0d5eb5607856d0f61a1bee156b182c214cbc8 WatchSource:0}: Error finding container 8b36b20830d480dd3c5d6afad0f0d5eb5607856d0f61a1bee156b182c214cbc8: Status 404 returned error can't find the container with id 8b36b20830d480dd3c5d6afad0f0d5eb5607856d0f61a1bee156b182c214cbc8 Jan 26 16:36:36 crc kubenswrapper[4713]: I0126 16:36:36.630915 4713 generic.go:334] "Generic (PLEG): container finished" podID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerID="7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f" exitCode=0 Jan 26 16:36:36 crc kubenswrapper[4713]: I0126 16:36:36.631169 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerDied","Data":"7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f"} Jan 26 16:36:36 crc kubenswrapper[4713]: I0126 16:36:36.631195 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerStarted","Data":"8b36b20830d480dd3c5d6afad0f0d5eb5607856d0f61a1bee156b182c214cbc8"} Jan 26 16:36:37 crc kubenswrapper[4713]: I0126 16:36:37.641956 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerStarted","Data":"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2"} Jan 26 16:36:38 crc kubenswrapper[4713]: I0126 16:36:38.653417 4713 generic.go:334] "Generic (PLEG): container finished" podID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerID="edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2" exitCode=0 Jan 26 16:36:38 crc kubenswrapper[4713]: I0126 16:36:38.653484 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerDied","Data":"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2"} Jan 26 16:36:39 crc kubenswrapper[4713]: I0126 16:36:39.676786 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerStarted","Data":"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16"} Jan 26 16:36:39 crc kubenswrapper[4713]: I0126 16:36:39.700993 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kzwjw" podStartSLOduration=3.282591923 podStartE2EDuration="5.700971282s" podCreationTimestamp="2026-01-26 16:36:34 +0000 UTC" firstStartedPulling="2026-01-26 16:36:36.633211977 +0000 UTC m=+3771.770229212" lastFinishedPulling="2026-01-26 16:36:39.051591336 +0000 UTC m=+3774.188608571" observedRunningTime="2026-01-26 16:36:39.696352571 +0000 UTC m=+3774.833369826" watchObservedRunningTime="2026-01-26 16:36:39.700971282 +0000 UTC m=+3774.837988517" Jan 26 16:36:40 crc kubenswrapper[4713]: I0126 16:36:40.138980 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:40 crc kubenswrapper[4713]: I0126 16:36:40.139326 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:40 crc kubenswrapper[4713]: I0126 16:36:40.181930 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:40 crc kubenswrapper[4713]: I0126 16:36:40.731688 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:42 crc kubenswrapper[4713]: I0126 16:36:42.170995 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:42 crc kubenswrapper[4713]: I0126 16:36:42.701577 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6g5mq" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="registry-server" containerID="cri-o://96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452" gracePeriod=2 Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.367273 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.425293 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr\") pod \"ec1684b5-ae4a-400b-95b4-37a555921c04\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.425396 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content\") pod \"ec1684b5-ae4a-400b-95b4-37a555921c04\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.425445 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities\") pod \"ec1684b5-ae4a-400b-95b4-37a555921c04\" (UID: \"ec1684b5-ae4a-400b-95b4-37a555921c04\") " Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.426694 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities" (OuterVolumeSpecName: "utilities") pod "ec1684b5-ae4a-400b-95b4-37a555921c04" (UID: "ec1684b5-ae4a-400b-95b4-37a555921c04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.442553 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr" (OuterVolumeSpecName: "kube-api-access-s4jdr") pod "ec1684b5-ae4a-400b-95b4-37a555921c04" (UID: "ec1684b5-ae4a-400b-95b4-37a555921c04"). InnerVolumeSpecName "kube-api-access-s4jdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.458570 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec1684b5-ae4a-400b-95b4-37a555921c04" (UID: "ec1684b5-ae4a-400b-95b4-37a555921c04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.527487 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.527516 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/ec1684b5-ae4a-400b-95b4-37a555921c04-kube-api-access-s4jdr\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.527525 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1684b5-ae4a-400b-95b4-37a555921c04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.713061 4713 generic.go:334] "Generic (PLEG): container finished" podID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerID="96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452" exitCode=0 Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.713127 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g5mq" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.713162 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerDied","Data":"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452"} Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.713777 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g5mq" event={"ID":"ec1684b5-ae4a-400b-95b4-37a555921c04","Type":"ContainerDied","Data":"75631f6de29fe3f7a0d1bcff05bcd1a2c0ba1f65beb651de1ac8c0536826a9c0"} Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.713809 4713 scope.go:117] "RemoveContainer" containerID="96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.734560 4713 scope.go:117] "RemoveContainer" containerID="ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.760175 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.762303 4713 scope.go:117] "RemoveContainer" containerID="8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.770375 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g5mq"] Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.804382 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:36:43 crc kubenswrapper[4713]: E0126 16:36:43.804668 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:36:43 crc kubenswrapper[4713]: 
I0126 16:36:43.818598 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" path="/var/lib/kubelet/pods/ec1684b5-ae4a-400b-95b4-37a555921c04/volumes" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.826441 4713 scope.go:117] "RemoveContainer" containerID="96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452" Jan 26 16:36:43 crc kubenswrapper[4713]: E0126 16:36:43.826840 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452\": container with ID starting with 96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452 not found: ID does not exist" containerID="96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.826879 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452"} err="failed to get container status \"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452\": rpc error: code = NotFound desc = could not find container \"96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452\": container with ID starting with 96364f3dcec0164acd404c4647a2f02fcd13097c1379e5a3e9df00886c8bc452 not found: ID does not exist" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.826903 4713 scope.go:117] "RemoveContainer" containerID="ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5" Jan 26 16:36:43 crc kubenswrapper[4713]: E0126 16:36:43.828601 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5\": container with ID starting with ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5 not found: ID does not exist" containerID="ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.828632 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5"} err="failed to get container status \"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5\": rpc error: code = NotFound desc = could not find container \"ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5\": container with ID starting with ad275042344369b3f11d068a80a1873944fdc1097e961f4f6d00416402486db5 not found: ID does not exist" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.828651 4713 scope.go:117] "RemoveContainer" containerID="8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696" Jan 26 16:36:43 crc kubenswrapper[4713]: E0126 16:36:43.829096 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696\": container with ID starting with 8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696 not found: ID does not exist" containerID="8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696" Jan 26 16:36:43 crc kubenswrapper[4713]: I0126 16:36:43.829124 4713 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696"} err="failed to get container status \"8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696\": rpc error: code = NotFound desc = could not find container \"8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696\": container with ID starting with 8cafb09589f798e2e792d53d76953b959cfde460eb98af07778be06975c00696 not found: ID does not exist" Jan 26 16:36:45 crc kubenswrapper[4713]: I0126 16:36:45.134851 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:45 crc kubenswrapper[4713]: I0126 16:36:45.135231 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:45 crc kubenswrapper[4713]: I0126 16:36:45.188228 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:45 crc kubenswrapper[4713]: I0126 16:36:45.924919 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:46 crc kubenswrapper[4713]: I0126 16:36:46.575066 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:47 crc kubenswrapper[4713]: I0126 16:36:47.750749 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzwjw" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="registry-server" containerID="cri-o://38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16" gracePeriod=2 Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.777963 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.826694 4713 generic.go:334] "Generic (PLEG): container finished" podID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerID="38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16" exitCode=0 Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.826949 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerDied","Data":"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16"} Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.827027 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzwjw" event={"ID":"9d4b2502-f694-41a5-b1af-bb02c6110d1a","Type":"ContainerDied","Data":"8b36b20830d480dd3c5d6afad0f0d5eb5607856d0f61a1bee156b182c214cbc8"} Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.827093 4713 scope.go:117] "RemoveContainer" containerID="38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.827324 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzwjw" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.831150 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-72ppc_e4913aa4-c0fe-4d3d-a5c3-64efb5c40291/cert-manager-controller/0.log" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.846120 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwgbh\" (UniqueName: \"kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh\") pod \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.846422 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content\") pod \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.846573 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities\") pod \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\" (UID: \"9d4b2502-f694-41a5-b1af-bb02c6110d1a\") " Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.849331 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities" (OuterVolumeSpecName: "utilities") pod "9d4b2502-f694-41a5-b1af-bb02c6110d1a" (UID: "9d4b2502-f694-41a5-b1af-bb02c6110d1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.860868 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh" (OuterVolumeSpecName: "kube-api-access-xwgbh") pod "9d4b2502-f694-41a5-b1af-bb02c6110d1a" (UID: "9d4b2502-f694-41a5-b1af-bb02c6110d1a"). InnerVolumeSpecName "kube-api-access-xwgbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.890526 4713 scope.go:117] "RemoveContainer" containerID="edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.931293 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4b2502-f694-41a5-b1af-bb02c6110d1a" (UID: "9d4b2502-f694-41a5-b1af-bb02c6110d1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.934472 4713 scope.go:117] "RemoveContainer" containerID="7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.949872 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwgbh\" (UniqueName: \"kubernetes.io/projected/9d4b2502-f694-41a5-b1af-bb02c6110d1a-kube-api-access-xwgbh\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.950104 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.950175 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b2502-f694-41a5-b1af-bb02c6110d1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.976998 4713 scope.go:117] "RemoveContainer" containerID="38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16" Jan 26 16:36:48 crc kubenswrapper[4713]: E0126 16:36:48.978406 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16\": container with ID starting with 38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16 not found: ID does not exist" containerID="38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.978448 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16"} err="failed to get container status \"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16\": rpc error: code = NotFound desc = could not find container \"38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16\": container with ID starting with 38434264faa68d492db3761cdc640cc74ba2e557cb0ed5f6a5d8d0fd25a9ae16 not found: ID does not exist" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.978475 4713 scope.go:117] "RemoveContainer" containerID="edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2" Jan 26 16:36:48 crc kubenswrapper[4713]: E0126 16:36:48.978899 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2\": container with ID starting with edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2 not found: ID does not exist" containerID="edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.979048 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2"} err="failed to get container status \"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2\": rpc error: code = NotFound desc = could not find container \"edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2\": container with ID starting with edcb6b5c8dfedb86345a9d0c06d40109df802ab9810b00ccf7de9b6c3e01c5b2 not found: ID does not exist" Jan 26 16:36:48 crc 
kubenswrapper[4713]: I0126 16:36:48.979158 4713 scope.go:117] "RemoveContainer" containerID="7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f" Jan 26 16:36:48 crc kubenswrapper[4713]: E0126 16:36:48.980756 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f\": container with ID starting with 7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f not found: ID does not exist" containerID="7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f" Jan 26 16:36:48 crc kubenswrapper[4713]: I0126 16:36:48.980876 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f"} err="failed to get container status \"7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f\": rpc error: code = NotFound desc = could not find container \"7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f\": container with ID starting with 7b446609ee88e90b65d05f3c179615f6fab18b2870007f2b51ccd3b15796eb8f not found: ID does not exist" Jan 26 16:36:49 crc kubenswrapper[4713]: I0126 16:36:49.024138 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l5zh6_e155e55f-092c-426f-9667-fa1bf707ee5b/cert-manager-cainjector/0.log" Jan 26 16:36:49 crc kubenswrapper[4713]: I0126 16:36:49.114065 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xdxtz_27379ef9-6846-4d31-a33b-f1c6baaac6b3/cert-manager-webhook/0.log" Jan 26 16:36:49 crc kubenswrapper[4713]: I0126 16:36:49.163401 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:49 crc kubenswrapper[4713]: I0126 16:36:49.175480 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzwjw"] Jan 26 16:36:49 crc kubenswrapper[4713]: I0126 16:36:49.814721 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" path="/var/lib/kubelet/pods/9d4b2502-f694-41a5-b1af-bb02c6110d1a/volumes" Jan 26 16:36:54 crc kubenswrapper[4713]: I0126 16:36:54.804330 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:36:54 crc kubenswrapper[4713]: E0126 16:36:54.805069 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:37:03 crc kubenswrapper[4713]: I0126 16:37:03.753013 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7bzvm_a6fe59c9-c3b5-407e-9d75-9e7f98d4142d/nmstate-console-plugin/0.log" Jan 26 16:37:03 crc kubenswrapper[4713]: I0126 16:37:03.959170 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-k8t4l_ad3354c5-b1d1-4473-99f1-0b1a9a4ded20/nmstate-handler/0.log" Jan 26 16:37:03 crc kubenswrapper[4713]: I0126 16:37:03.963272 4713 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-52jc4_e4ccf912-a778-4d91-84d9-bbfb4f83c221/kube-rbac-proxy/0.log" Jan 26 16:37:04 crc kubenswrapper[4713]: I0126 16:37:04.118286 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-52jc4_e4ccf912-a778-4d91-84d9-bbfb4f83c221/nmstate-metrics/0.log" Jan 26 16:37:04 crc kubenswrapper[4713]: I0126 16:37:04.180380 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-s598k_67b6fbcb-7c02-4dd2-9da0-b5d2fb39e94c/nmstate-operator/0.log" Jan 26 16:37:04 crc kubenswrapper[4713]: I0126 16:37:04.455269 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t929j_8f508dd8-5689-4e70-b252-5a4e6204bd4b/nmstate-webhook/0.log" Jan 26 16:37:07 crc kubenswrapper[4713]: I0126 16:37:07.803755 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:37:07 crc kubenswrapper[4713]: E0126 16:37:07.804371 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:37:19 crc kubenswrapper[4713]: I0126 16:37:19.670703 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-685487c794-sjbsp_e2e9f61c-c80e-443b-9175-15f2dcfaba60/kube-rbac-proxy/0.log" Jan 26 16:37:19 crc kubenswrapper[4713]: I0126 16:37:19.782448 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-685487c794-sjbsp_e2e9f61c-c80e-443b-9175-15f2dcfaba60/manager/0.log" Jan 26 16:37:19 crc kubenswrapper[4713]: I0126 16:37:19.804841 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:37:19 crc kubenswrapper[4713]: E0126 16:37:19.805063 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:37:33 crc kubenswrapper[4713]: I0126 16:37:33.803601 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:37:33 crc kubenswrapper[4713]: E0126 16:37:33.804490 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:37:34 crc kubenswrapper[4713]: I0126 16:37:34.868630 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rmjvp_913497e5-68bd-48dd-aed5-babd17f47f0e/prometheus-operator/0.log" Jan 26 16:37:35 crc kubenswrapper[4713]: I0126 16:37:35.003803 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_58c7e269-8e8b-4ee4-a57e-ab4218256bbb/prometheus-operator-admission-webhook/0.log" Jan 26 16:37:35 crc kubenswrapper[4713]: I0126 16:37:35.195800 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_72150c7a-70d1-4f39-9649-840dbf9571d2/prometheus-operator-admission-webhook/0.log" Jan 26 16:37:35 crc kubenswrapper[4713]: I0126 16:37:35.363702 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l79jc_9b4ece96-60c6-4974-af3e-6a61eebaf729/operator/0.log" Jan 26 16:37:35 crc kubenswrapper[4713]: I0126 16:37:35.491636 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-77g4l_ebe35fcf-702c-42da-8eba-33bb585c50db/perses-operator/0.log" Jan 26 16:37:44 crc kubenswrapper[4713]: I0126 16:37:44.808082 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:37:44 crc kubenswrapper[4713]: E0126 16:37:44.809193 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.260485 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xqkgk_23261bd8-2fa1-4f97-851f-85aff45181b8/kube-rbac-proxy/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.279005 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xqkgk_23261bd8-2fa1-4f97-851f-85aff45181b8/controller/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.494382 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-frr-files/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.656204 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-frr-files/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.664703 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-reloader/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.689505 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-metrics/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.719538 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-reloader/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.941027 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-reloader/0.log" Jan 26 16:37:52 crc kubenswrapper[4713]: I0126 16:37:52.953740 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-frr-files/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.043690 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-metrics/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.050655 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-metrics/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.241192 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-frr-files/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.317838 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-metrics/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.352699 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/cp-reloader/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.371374 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/controller/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.657226 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/kube-rbac-proxy-frr/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.663818 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/frr-metrics/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.765661 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/kube-rbac-proxy/0.log" Jan 26 16:37:53 crc kubenswrapper[4713]: I0126 16:37:53.854048 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/reloader/0.log" Jan 26 16:37:54 crc kubenswrapper[4713]: I0126 16:37:54.024681 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-t4l45_43b745e9-8cc0-4186-bf90-355ce248ab27/frr-k8s-webhook-server/0.log" Jan 26 16:37:54 crc kubenswrapper[4713]: I0126 16:37:54.233470 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-778b445bd5-8bzgb_351bcd96-d2cb-4d74-8794-69a879f52c35/manager/0.log" Jan 26 16:37:54 crc kubenswrapper[4713]: I0126 16:37:54.419698 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c895c556d-p2djc_b6ba8417-c80d-4ef5-b5d9-d93ce9c6c428/webhook-server/0.log" Jan 26 16:37:54 crc kubenswrapper[4713]: I0126 16:37:54.565956 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nwj9r_5d4dd3fb-43d0-46a8-9a41-1122358e82ce/kube-rbac-proxy/0.log" Jan 26 16:37:54 crc kubenswrapper[4713]: I0126 16:37:54.738123 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-66jj8_434ec099-efe3-4f0e-812c-2b684c7f8274/frr/0.log" Jan 26 16:37:55 crc kubenswrapper[4713]: I0126 16:37:55.085533 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nwj9r_5d4dd3fb-43d0-46a8-9a41-1122358e82ce/speaker/0.log" Jan 26 16:37:58 crc kubenswrapper[4713]: I0126 16:37:58.804553 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:37:58 crc kubenswrapper[4713]: E0126 16:37:58.805225 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:38:08 crc kubenswrapper[4713]: I0126 16:38:08.996122 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/util/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.150819 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/util/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.194490 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/pull/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.204987 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/pull/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.395559 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/pull/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.398085 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/util/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.428441 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc55rvp_83046eff-95ef-45f2-bdfa-24e38df1cfb0/extract/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.539633 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/util/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.751520 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/pull/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.779284 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/pull/0.log" Jan 26 16:38:09 crc kubenswrapper[4713]: I0126 16:38:09.799075 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/util/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.076173 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/extract/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.079593 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/util/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.129783 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7736bx9p_4ffd789b-98cd-4fd1-a531-95d329e68c9b/pull/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.273274 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/util/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.449618 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/util/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.475686 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/pull/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.525172 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/pull/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.890663 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/pull/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.919284 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/util/0.log" Jan 26 16:38:10 crc kubenswrapper[4713]: I0126 16:38:10.950973 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q7pl_9323c729-cd29-40a2-9ed3-49844ca9e66c/extract/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.126423 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/util/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.322529 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/pull/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.340161 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/pull/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.351255 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/util/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.517530 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/util/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.554985 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/extract/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.577057 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086lnmm_4f30d0d9-a953-4da6-be6b-32fc986c16ae/pull/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.732037 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-utilities/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.803520 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:38:11 crc kubenswrapper[4713]: E0126 16:38:11.803978 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.909242 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-content/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.912487 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-utilities/0.log" Jan 26 16:38:11 crc kubenswrapper[4713]: I0126 16:38:11.913566 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-content/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.167638 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-utilities/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.188198 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/extract-content/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.388981 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-utilities/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.629846 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkffc_a996c191-52e4-490d-a15a-9def9a651be5/registry-server/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.692293 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-content/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.693088 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-utilities/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.697128 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-content/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.858089 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-utilities/0.log" Jan 26 16:38:12 crc kubenswrapper[4713]: I0126 16:38:12.887143 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/extract-content/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.135670 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4q88z_3d260b45-a0d0-4b98-9f8f-96d788e6d145/marketplace-operator/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.236295 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-utilities/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.390530 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jc8sm_b17b3d7f-6672-4596-ad0c-39a9bfac5792/registry-server/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.556608 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-content/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.558570 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-utilities/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.583452 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-content/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.831369 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-content/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.860713 4713 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-utilities/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.885937 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/extract-utilities/0.log" Jan 26 16:38:13 crc kubenswrapper[4713]: I0126 16:38:13.905285 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dwmf8_a1a9bc74-ffa8-4646-be3e-09cee80a5d04/registry-server/0.log" Jan 26 16:38:14 crc kubenswrapper[4713]: I0126 16:38:14.316926 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-utilities/0.log" Jan 26 16:38:14 crc kubenswrapper[4713]: I0126 16:38:14.341882 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-content/0.log" Jan 26 16:38:14 crc kubenswrapper[4713]: I0126 16:38:14.345706 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-content/0.log" Jan 26 16:38:14 crc kubenswrapper[4713]: I0126 16:38:14.635874 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-utilities/0.log" Jan 26 16:38:14 crc kubenswrapper[4713]: I0126 16:38:14.664548 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/extract-content/0.log" Jan 26 16:38:15 crc kubenswrapper[4713]: I0126 16:38:15.237101 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4jw55_3da35423-6430-4e55-83aa-8a99fe5bdf2d/registry-server/0.log" Jan 26 16:38:26 crc kubenswrapper[4713]: I0126 16:38:26.804833 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:38:26 crc kubenswrapper[4713]: E0126 16:38:26.805515 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:38:30 crc kubenswrapper[4713]: I0126 16:38:30.374188 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rmjvp_913497e5-68bd-48dd-aed5-babd17f47f0e/prometheus-operator/0.log" Jan 26 16:38:30 crc kubenswrapper[4713]: I0126 16:38:30.403895 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fdb868589-4b2ds_58c7e269-8e8b-4ee4-a57e-ab4218256bbb/prometheus-operator-admission-webhook/0.log" Jan 26 16:38:30 crc kubenswrapper[4713]: I0126 16:38:30.473676 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fdb868589-qgzx4_72150c7a-70d1-4f39-9649-840dbf9571d2/prometheus-operator-admission-webhook/0.log" Jan 26 16:38:30 crc kubenswrapper[4713]: I0126 
16:38:30.636877 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l79jc_9b4ece96-60c6-4974-af3e-6a61eebaf729/operator/0.log" Jan 26 16:38:30 crc kubenswrapper[4713]: I0126 16:38:30.639309 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-77g4l_ebe35fcf-702c-42da-8eba-33bb585c50db/perses-operator/0.log" Jan 26 16:38:41 crc kubenswrapper[4713]: I0126 16:38:41.804551 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:38:41 crc kubenswrapper[4713]: E0126 16:38:41.805308 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:38:46 crc kubenswrapper[4713]: I0126 16:38:46.118594 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-685487c794-sjbsp_e2e9f61c-c80e-443b-9175-15f2dcfaba60/kube-rbac-proxy/0.log" Jan 26 16:38:46 crc kubenswrapper[4713]: I0126 16:38:46.174220 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-685487c794-sjbsp_e2e9f61c-c80e-443b-9175-15f2dcfaba60/manager/0.log" Jan 26 16:38:55 crc kubenswrapper[4713]: I0126 16:38:55.810511 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:38:55 crc kubenswrapper[4713]: E0126 16:38:55.811422 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:39:08 crc kubenswrapper[4713]: I0126 16:39:08.803422 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:39:08 crc kubenswrapper[4713]: E0126 16:39:08.804091 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:39:20 crc kubenswrapper[4713]: I0126 16:39:20.803403 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:39:20 crc kubenswrapper[4713]: E0126 16:39:20.805758 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:39:35 crc kubenswrapper[4713]: I0126 16:39:35.815700 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:39:35 crc kubenswrapper[4713]: E0126 16:39:35.816626 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:39:48 crc kubenswrapper[4713]: I0126 16:39:48.804731 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:39:48 crc kubenswrapper[4713]: E0126 16:39:48.806101 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:40:00 crc kubenswrapper[4713]: I0126 16:40:00.804319 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:40:00 crc kubenswrapper[4713]: E0126 16:40:00.805301 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.913875 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914689 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914700 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914724 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="extract-content" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914730 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="extract-content" Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914740 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="extract-content" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914746 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="extract-content" Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914755 
4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="extract-utilities" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914761 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="extract-utilities" Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914769 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="extract-utilities" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914775 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="extract-utilities" Jan 26 16:40:11 crc kubenswrapper[4713]: E0126 16:40:11.914793 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914799 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.914991 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1684b5-ae4a-400b-95b4-37a555921c04" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.915020 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4b2502-f694-41a5-b1af-bb02c6110d1a" containerName="registry-server" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.916913 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.927461 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.927560 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpn7c\" (UniqueName: \"kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.927582 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:11 crc kubenswrapper[4713]: I0126 16:40:11.929929 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.029662 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpn7c\" (UniqueName: \"kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc 
kubenswrapper[4713]: I0126 16:40:12.029729 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.029987 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.030547 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.030617 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.050866 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpn7c\" (UniqueName: \"kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c\") pod \"redhat-operators-7z64h\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.241995 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.724302 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:12 crc kubenswrapper[4713]: I0126 16:40:12.895940 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerStarted","Data":"19b480a08ffd52da98b9abd9379ecb6d8a1f1a7f952389f123fd59f2740cbd8d"} Jan 26 16:40:13 crc kubenswrapper[4713]: I0126 16:40:13.910677 4713 generic.go:334] "Generic (PLEG): container finished" podID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerID="6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472" exitCode=0 Jan 26 16:40:13 crc kubenswrapper[4713]: I0126 16:40:13.910778 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerDied","Data":"6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472"} Jan 26 16:40:13 crc kubenswrapper[4713]: I0126 16:40:13.914625 4713 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:40:15 crc kubenswrapper[4713]: I0126 16:40:15.812859 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:40:15 crc kubenswrapper[4713]: E0126 16:40:15.813721 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:40:15 crc kubenswrapper[4713]: I0126 16:40:15.933620 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerStarted","Data":"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd"} Jan 26 16:40:17 crc kubenswrapper[4713]: I0126 16:40:17.962581 4713 generic.go:334] "Generic (PLEG): container finished" podID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerID="8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd" exitCode=0 Jan 26 16:40:17 crc kubenswrapper[4713]: I0126 16:40:17.963037 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerDied","Data":"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd"} Jan 26 16:40:23 crc kubenswrapper[4713]: I0126 16:40:23.014135 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerStarted","Data":"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6"} Jan 26 16:40:23 crc kubenswrapper[4713]: I0126 16:40:23.040104 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7z64h" podStartSLOduration=4.189263471 podStartE2EDuration="12.040085678s" podCreationTimestamp="2026-01-26 16:40:11 +0000 UTC" firstStartedPulling="2026-01-26 16:40:13.914118936 
+0000 UTC m=+3989.051136201" lastFinishedPulling="2026-01-26 16:40:21.764941173 +0000 UTC m=+3996.901958408" observedRunningTime="2026-01-26 16:40:23.03272408 +0000 UTC m=+3998.169741315" watchObservedRunningTime="2026-01-26 16:40:23.040085678 +0000 UTC m=+3998.177102913" Jan 26 16:40:27 crc kubenswrapper[4713]: I0126 16:40:27.803983 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:40:27 crc kubenswrapper[4713]: E0126 16:40:27.804723 4713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tn7l2_openshift-machine-config-operator(f608dd80-4cbf-4490-b062-2bef233d25d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" Jan 26 16:40:32 crc kubenswrapper[4713]: I0126 16:40:32.242803 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:32 crc kubenswrapper[4713]: I0126 16:40:32.245021 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:32 crc kubenswrapper[4713]: I0126 16:40:32.312531 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:33 crc kubenswrapper[4713]: I0126 16:40:33.315541 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:33 crc kubenswrapper[4713]: I0126 16:40:33.379496 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:35 crc kubenswrapper[4713]: I0126 16:40:35.287624 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7z64h" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="registry-server" containerID="cri-o://55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6" gracePeriod=2 Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.078863 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.219946 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpn7c\" (UniqueName: \"kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c\") pod \"f8908455-4d9d-4f3a-8637-242827d8dab9\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.220004 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities\") pod \"f8908455-4d9d-4f3a-8637-242827d8dab9\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.220061 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content\") pod \"f8908455-4d9d-4f3a-8637-242827d8dab9\" (UID: \"f8908455-4d9d-4f3a-8637-242827d8dab9\") " Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.221186 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities" (OuterVolumeSpecName: "utilities") pod "f8908455-4d9d-4f3a-8637-242827d8dab9" (UID: "f8908455-4d9d-4f3a-8637-242827d8dab9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.233079 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c" (OuterVolumeSpecName: "kube-api-access-qpn7c") pod "f8908455-4d9d-4f3a-8637-242827d8dab9" (UID: "f8908455-4d9d-4f3a-8637-242827d8dab9"). InnerVolumeSpecName "kube-api-access-qpn7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.306960 4713 generic.go:334] "Generic (PLEG): container finished" podID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerID="55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6" exitCode=0 Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.308204 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerDied","Data":"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6"} Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.308313 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z64h" event={"ID":"f8908455-4d9d-4f3a-8637-242827d8dab9","Type":"ContainerDied","Data":"19b480a08ffd52da98b9abd9379ecb6d8a1f1a7f952389f123fd59f2740cbd8d"} Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.308439 4713 scope.go:117] "RemoveContainer" containerID="55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.308818 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z64h" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.329982 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpn7c\" (UniqueName: \"kubernetes.io/projected/f8908455-4d9d-4f3a-8637-242827d8dab9-kube-api-access-qpn7c\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.330021 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.340616 4713 scope.go:117] "RemoveContainer" containerID="8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.362642 4713 scope.go:117] "RemoveContainer" containerID="6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.365964 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8908455-4d9d-4f3a-8637-242827d8dab9" (UID: "f8908455-4d9d-4f3a-8637-242827d8dab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.405776 4713 scope.go:117] "RemoveContainer" containerID="55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6" Jan 26 16:40:36 crc kubenswrapper[4713]: E0126 16:40:36.406526 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6\": container with ID starting with 55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6 not found: ID does not exist" containerID="55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.406569 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6"} err="failed to get container status \"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6\": rpc error: code = NotFound desc = could not find container \"55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6\": container with ID starting with 55d67ccb29dfc9a0bea68016a05413fe53bbdcb224f37dda9bbbe5ab9171e5b6 not found: ID does not exist" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.406591 4713 scope.go:117] "RemoveContainer" containerID="8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd" Jan 26 16:40:36 crc kubenswrapper[4713]: E0126 16:40:36.406834 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd\": container with ID starting with 8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd not found: ID does not exist" containerID="8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.406861 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd"} err="failed to get 
container status \"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd\": rpc error: code = NotFound desc = could not find container \"8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd\": container with ID starting with 8b83d1681e472725532677779fce99d767d77c560cf8230c2e5bca99137018cd not found: ID does not exist" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.406879 4713 scope.go:117] "RemoveContainer" containerID="6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472" Jan 26 16:40:36 crc kubenswrapper[4713]: E0126 16:40:36.407132 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472\": container with ID starting with 6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472 not found: ID does not exist" containerID="6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.407158 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472"} err="failed to get container status \"6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472\": rpc error: code = NotFound desc = could not find container \"6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472\": container with ID starting with 6144c57600dbc6284e2c0bd8f0dc523b55cbf365944da425add964acdeb53472 not found: ID does not exist" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.432229 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8908455-4d9d-4f3a-8637-242827d8dab9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.647551 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:36 crc kubenswrapper[4713]: I0126 16:40:36.659874 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7z64h"] Jan 26 16:40:37 crc kubenswrapper[4713]: I0126 16:40:37.823163 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" path="/var/lib/kubelet/pods/f8908455-4d9d-4f3a-8637-242827d8dab9/volumes" Jan 26 16:40:38 crc kubenswrapper[4713]: I0126 16:40:38.805286 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:40:40 crc kubenswrapper[4713]: I0126 16:40:40.373239 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"638bb20bb13c5a5c6df92c076848792bd84efacc47b853e8962817836f553ae8"} Jan 26 16:40:44 crc kubenswrapper[4713]: I0126 16:40:44.424135 4713 generic.go:334] "Generic (PLEG): container finished" podID="0d6370a0-f234-4f00-a9da-f166704c4278" containerID="3372a754ed90a6cde6b31e51fb834cd5ab29815ba53de9636fd291208780833e" exitCode=0 Jan 26 16:40:44 crc kubenswrapper[4713]: I0126 16:40:44.424191 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-plrnd/must-gather-6jmrh" event={"ID":"0d6370a0-f234-4f00-a9da-f166704c4278","Type":"ContainerDied","Data":"3372a754ed90a6cde6b31e51fb834cd5ab29815ba53de9636fd291208780833e"} Jan 26 16:40:44 crc 
kubenswrapper[4713]: I0126 16:40:44.425481 4713 scope.go:117] "RemoveContainer" containerID="3372a754ed90a6cde6b31e51fb834cd5ab29815ba53de9636fd291208780833e" Jan 26 16:40:44 crc kubenswrapper[4713]: I0126 16:40:44.623579 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-plrnd_must-gather-6jmrh_0d6370a0-f234-4f00-a9da-f166704c4278/gather/0.log" Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.281538 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-plrnd/must-gather-6jmrh"] Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.282256 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-plrnd/must-gather-6jmrh" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="copy" containerID="cri-o://5a3b743afcb05eafa811e22722d7d7e3a73f8815f4f947e88f916d679be29e65" gracePeriod=2 Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.290253 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-plrnd/must-gather-6jmrh"] Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.543034 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-plrnd_must-gather-6jmrh_0d6370a0-f234-4f00-a9da-f166704c4278/copy/0.log" Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.543957 4713 generic.go:334] "Generic (PLEG): container finished" podID="0d6370a0-f234-4f00-a9da-f166704c4278" containerID="5a3b743afcb05eafa811e22722d7d7e3a73f8815f4f947e88f916d679be29e65" exitCode=143 Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.893579 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-plrnd_must-gather-6jmrh_0d6370a0-f234-4f00-a9da-f166704c4278/copy/0.log" Jan 26 16:40:53 crc kubenswrapper[4713]: I0126 16:40:53.894150 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.012067 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output\") pod \"0d6370a0-f234-4f00-a9da-f166704c4278\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.012382 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66mkr\" (UniqueName: \"kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr\") pod \"0d6370a0-f234-4f00-a9da-f166704c4278\" (UID: \"0d6370a0-f234-4f00-a9da-f166704c4278\") " Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.029715 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr" (OuterVolumeSpecName: "kube-api-access-66mkr") pod "0d6370a0-f234-4f00-a9da-f166704c4278" (UID: "0d6370a0-f234-4f00-a9da-f166704c4278"). InnerVolumeSpecName "kube-api-access-66mkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.115321 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66mkr\" (UniqueName: \"kubernetes.io/projected/0d6370a0-f234-4f00-a9da-f166704c4278-kube-api-access-66mkr\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.201374 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d6370a0-f234-4f00-a9da-f166704c4278" (UID: "0d6370a0-f234-4f00-a9da-f166704c4278"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.218735 4713 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d6370a0-f234-4f00-a9da-f166704c4278-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.591894 4713 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-plrnd_must-gather-6jmrh_0d6370a0-f234-4f00-a9da-f166704c4278/copy/0.log" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.594660 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-plrnd/must-gather-6jmrh" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.595150 4713 scope.go:117] "RemoveContainer" containerID="5a3b743afcb05eafa811e22722d7d7e3a73f8815f4f947e88f916d679be29e65" Jan 26 16:40:54 crc kubenswrapper[4713]: I0126 16:40:54.628123 4713 scope.go:117] "RemoveContainer" containerID="3372a754ed90a6cde6b31e51fb834cd5ab29815ba53de9636fd291208780833e" Jan 26 16:40:55 crc kubenswrapper[4713]: I0126 16:40:55.814030 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" path="/var/lib/kubelet/pods/0d6370a0-f234-4f00-a9da-f166704c4278/volumes" Jan 26 16:43:03 crc kubenswrapper[4713]: I0126 16:43:03.301435 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:43:03 crc kubenswrapper[4713]: I0126 16:43:03.301934 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.762495 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:11 crc kubenswrapper[4713]: E0126 16:43:11.763399 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="extract-content" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763412 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="extract-content" Jan 26 16:43:11 crc kubenswrapper[4713]: E0126 16:43:11.763436 4713 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="registry-server" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763443 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="registry-server" Jan 26 16:43:11 crc kubenswrapper[4713]: E0126 16:43:11.763457 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="extract-utilities" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763464 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="extract-utilities" Jan 26 16:43:11 crc kubenswrapper[4713]: E0126 16:43:11.763489 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="gather" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763494 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="gather" Jan 26 16:43:11 crc kubenswrapper[4713]: E0126 16:43:11.763517 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="copy" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763523 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="copy" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763707 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="copy" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763735 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6370a0-f234-4f00-a9da-f166704c4278" containerName="gather" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.763748 4713 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8908455-4d9d-4f3a-8637-242827d8dab9" containerName="registry-server" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.765205 4713 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.781142 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.870162 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.870624 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbm8\" (UniqueName: \"kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.870671 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.972250 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbm8\" (UniqueName: \"kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.972318 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.972496 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.973098 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.973112 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:11 crc kubenswrapper[4713]: I0126 16:43:11.990266 4713 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zgbm8\" (UniqueName: \"kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8\") pod \"community-operators-286n6\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:12 crc kubenswrapper[4713]: I0126 16:43:12.089756 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:12 crc kubenswrapper[4713]: I0126 16:43:12.800264 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:13 crc kubenswrapper[4713]: I0126 16:43:13.035668 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerStarted","Data":"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff"} Jan 26 16:43:13 crc kubenswrapper[4713]: I0126 16:43:13.035997 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerStarted","Data":"d06cf6c3e3de5703a397419a52238259fac416f221a06f9d341c27cca6dc9e32"} Jan 26 16:43:14 crc kubenswrapper[4713]: I0126 16:43:14.048465 4713 generic.go:334] "Generic (PLEG): container finished" podID="7bcd7630-4628-483a-9fd1-ca3463712574" containerID="de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff" exitCode=0 Jan 26 16:43:14 crc kubenswrapper[4713]: I0126 16:43:14.048527 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerDied","Data":"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff"} Jan 26 16:43:14 crc kubenswrapper[4713]: I0126 16:43:14.049846 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerStarted","Data":"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c"} Jan 26 16:43:15 crc kubenswrapper[4713]: I0126 16:43:15.061546 4713 generic.go:334] "Generic (PLEG): container finished" podID="7bcd7630-4628-483a-9fd1-ca3463712574" containerID="7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c" exitCode=0 Jan 26 16:43:15 crc kubenswrapper[4713]: I0126 16:43:15.061824 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerDied","Data":"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c"} Jan 26 16:43:16 crc kubenswrapper[4713]: I0126 16:43:16.072827 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerStarted","Data":"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38"} Jan 26 16:43:16 crc kubenswrapper[4713]: I0126 16:43:16.105591 4713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-286n6" podStartSLOduration=2.618310572 podStartE2EDuration="5.105567231s" podCreationTimestamp="2026-01-26 16:43:11 +0000 UTC" firstStartedPulling="2026-01-26 16:43:13.037157044 +0000 UTC m=+4168.174174279" lastFinishedPulling="2026-01-26 
16:43:15.524413693 +0000 UTC m=+4170.661430938" observedRunningTime="2026-01-26 16:43:16.09527222 +0000 UTC m=+4171.232289465" watchObservedRunningTime="2026-01-26 16:43:16.105567231 +0000 UTC m=+4171.242584466" Jan 26 16:43:22 crc kubenswrapper[4713]: I0126 16:43:22.090882 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:22 crc kubenswrapper[4713]: I0126 16:43:22.091495 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:22 crc kubenswrapper[4713]: I0126 16:43:22.164469 4713 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:22 crc kubenswrapper[4713]: I0126 16:43:22.234146 4713 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:22 crc kubenswrapper[4713]: I0126 16:43:22.417637 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:24 crc kubenswrapper[4713]: I0126 16:43:24.153556 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-286n6" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="registry-server" containerID="cri-o://d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38" gracePeriod=2 Jan 26 16:43:24 crc kubenswrapper[4713]: I0126 16:43:24.969341 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.049026 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities\") pod \"7bcd7630-4628-483a-9fd1-ca3463712574\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.049093 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgbm8\" (UniqueName: \"kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8\") pod \"7bcd7630-4628-483a-9fd1-ca3463712574\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.049221 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content\") pod \"7bcd7630-4628-483a-9fd1-ca3463712574\" (UID: \"7bcd7630-4628-483a-9fd1-ca3463712574\") " Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.049790 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities" (OuterVolumeSpecName: "utilities") pod "7bcd7630-4628-483a-9fd1-ca3463712574" (UID: "7bcd7630-4628-483a-9fd1-ca3463712574"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.056098 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8" (OuterVolumeSpecName: "kube-api-access-zgbm8") pod "7bcd7630-4628-483a-9fd1-ca3463712574" (UID: "7bcd7630-4628-483a-9fd1-ca3463712574"). InnerVolumeSpecName "kube-api-access-zgbm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.103850 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bcd7630-4628-483a-9fd1-ca3463712574" (UID: "7bcd7630-4628-483a-9fd1-ca3463712574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.152027 4713 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.152060 4713 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bcd7630-4628-483a-9fd1-ca3463712574-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.152071 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgbm8\" (UniqueName: \"kubernetes.io/projected/7bcd7630-4628-483a-9fd1-ca3463712574-kube-api-access-zgbm8\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.165765 4713 generic.go:334] "Generic (PLEG): container finished" podID="7bcd7630-4628-483a-9fd1-ca3463712574" containerID="d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38" exitCode=0 Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.165917 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-286n6" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.165951 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerDied","Data":"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38"} Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.166183 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-286n6" event={"ID":"7bcd7630-4628-483a-9fd1-ca3463712574","Type":"ContainerDied","Data":"d06cf6c3e3de5703a397419a52238259fac416f221a06f9d341c27cca6dc9e32"} Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.166230 4713 scope.go:117] "RemoveContainer" containerID="d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.198250 4713 scope.go:117] "RemoveContainer" containerID="7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.210745 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.219033 4713 scope.go:117] "RemoveContainer" containerID="de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.227780 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-286n6"] Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.286070 4713 scope.go:117] "RemoveContainer" containerID="d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38" Jan 26 16:43:25 crc kubenswrapper[4713]: E0126 16:43:25.286735 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38\": container with ID starting with d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38 not found: ID does not exist" containerID="d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.286824 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38"} err="failed to get container status \"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38\": rpc error: code = NotFound desc = could not find container \"d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38\": container with ID starting with d77d9ee7dd1aa8d13e1b226f45422c195633ac65aa225993c3eb6712a9774f38 not found: ID does not exist" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.286909 4713 scope.go:117] "RemoveContainer" containerID="7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c" Jan 26 16:43:25 crc kubenswrapper[4713]: E0126 16:43:25.287438 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c\": container with ID starting with 7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c not found: ID does not exist" containerID="7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.287508 4713 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c"} err="failed to get container status \"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c\": rpc error: code = NotFound desc = could not find container \"7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c\": container with ID starting with 7307056ff96ed0f8b16682a838bb6036a3f93084099809f26b36d30a6115717c not found: ID does not exist" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.287577 4713 scope.go:117] "RemoveContainer" containerID="de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff" Jan 26 16:43:25 crc kubenswrapper[4713]: E0126 16:43:25.287920 4713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff\": container with ID starting with de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff not found: ID does not exist" containerID="de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.287959 4713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff"} err="failed to get container status \"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff\": rpc error: code = NotFound desc = could not find container \"de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff\": container with ID starting with de212ba42109e97c4c53b25e6f075a789a4457f148abb741216e4b2bbb7dbbff not found: ID does not exist" Jan 26 16:43:25 crc kubenswrapper[4713]: I0126 16:43:25.821668 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" path="/var/lib/kubelet/pods/7bcd7630-4628-483a-9fd1-ca3463712574/volumes" Jan 26 16:43:33 crc kubenswrapper[4713]: I0126 16:43:33.301583 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:43:33 crc kubenswrapper[4713]: I0126 16:43:33.302345 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.301680 4713 patch_prober.go:28] interesting pod/machine-config-daemon-tn7l2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.303101 4713 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 
16:44:03.303189 4713 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.304276 4713 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"638bb20bb13c5a5c6df92c076848792bd84efacc47b853e8962817836f553ae8"} pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.304341 4713 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" podUID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerName="machine-config-daemon" containerID="cri-o://638bb20bb13c5a5c6df92c076848792bd84efacc47b853e8962817836f553ae8" gracePeriod=600 Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.625704 4713 generic.go:334] "Generic (PLEG): container finished" podID="f608dd80-4cbf-4490-b062-2bef233d25d1" containerID="638bb20bb13c5a5c6df92c076848792bd84efacc47b853e8962817836f553ae8" exitCode=0 Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.625796 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerDied","Data":"638bb20bb13c5a5c6df92c076848792bd84efacc47b853e8962817836f553ae8"} Jan 26 16:44:03 crc kubenswrapper[4713]: I0126 16:44:03.626171 4713 scope.go:117] "RemoveContainer" containerID="55eeb958b24ce11f1792c076444f298520cd9fefee6839d70dfe73cf27ac94de" Jan 26 16:44:04 crc kubenswrapper[4713]: I0126 16:44:04.638944 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tn7l2" event={"ID":"f608dd80-4cbf-4490-b062-2bef233d25d1","Type":"ContainerStarted","Data":"81fd928cea1e9b10b047182f8a900eff09e56f757daa527614121adebc838ad1"} Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.193092 4713 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz"] Jan 26 16:45:00 crc kubenswrapper[4713]: E0126 16:45:00.194099 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.194120 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4713]: E0126 16:45:00.194163 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.194172 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4713]: E0126 16:45:00.194196 4713 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.194205 4713 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.194501 4713 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7bcd7630-4628-483a-9fd1-ca3463712574" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.197158 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.199649 4713 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.199914 4713 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.207167 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz"] Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.288767 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.288820 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.288939 4713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4hn\" (UniqueName: \"kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.390915 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4hn\" (UniqueName: \"kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.391124 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.391181 4713 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 
16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.392498 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.402402 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.408242 4713 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4hn\" (UniqueName: \"kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn\") pod \"collect-profiles-29490765-wbvdz\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:00 crc kubenswrapper[4713]: I0126 16:45:00.515752 4713 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:01 crc kubenswrapper[4713]: I0126 16:45:01.008750 4713 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz"] Jan 26 16:45:01 crc kubenswrapper[4713]: I0126 16:45:01.271573 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" event={"ID":"70f425eb-88c4-44d7-9591-d9ecaabe0476","Type":"ContainerStarted","Data":"db2c33c8d0aed22149625201e1c32c7e727c9a4efb1c75fa6cdc645ead806410"} Jan 26 16:45:02 crc kubenswrapper[4713]: I0126 16:45:02.283853 4713 generic.go:334] "Generic (PLEG): container finished" podID="70f425eb-88c4-44d7-9591-d9ecaabe0476" containerID="2536499753c1fe177d7b49fd71750d2a604178355ee3feec7718a2574d6192c5" exitCode=0 Jan 26 16:45:02 crc kubenswrapper[4713]: I0126 16:45:02.283949 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" event={"ID":"70f425eb-88c4-44d7-9591-d9ecaabe0476","Type":"ContainerDied","Data":"2536499753c1fe177d7b49fd71750d2a604178355ee3feec7718a2574d6192c5"} Jan 26 16:45:03 crc kubenswrapper[4713]: I0126 16:45:03.801764 4713 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:03 crc kubenswrapper[4713]: I0126 16:45:03.966140 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q4hn\" (UniqueName: \"kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn\") pod \"70f425eb-88c4-44d7-9591-d9ecaabe0476\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " Jan 26 16:45:03 crc kubenswrapper[4713]: I0126 16:45:03.966251 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume\") pod \"70f425eb-88c4-44d7-9591-d9ecaabe0476\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " Jan 26 16:45:03 crc kubenswrapper[4713]: I0126 16:45:03.966545 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") pod \"70f425eb-88c4-44d7-9591-d9ecaabe0476\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " Jan 26 16:45:03 crc kubenswrapper[4713]: I0126 16:45:03.967499 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume" (OuterVolumeSpecName: "config-volume") pod "70f425eb-88c4-44d7-9591-d9ecaabe0476" (UID: "70f425eb-88c4-44d7-9591-d9ecaabe0476"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.069641 4713 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f425eb-88c4-44d7-9591-d9ecaabe0476-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.305043 4713 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" event={"ID":"70f425eb-88c4-44d7-9591-d9ecaabe0476","Type":"ContainerDied","Data":"db2c33c8d0aed22149625201e1c32c7e727c9a4efb1c75fa6cdc645ead806410"} Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.305081 4713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db2c33c8d0aed22149625201e1c32c7e727c9a4efb1c75fa6cdc645ead806410" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.305129 4713 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-wbvdz" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.681579 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70f425eb-88c4-44d7-9591-d9ecaabe0476" (UID: "70f425eb-88c4-44d7-9591-d9ecaabe0476"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.681718 4713 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") pod \"70f425eb-88c4-44d7-9591-d9ecaabe0476\" (UID: \"70f425eb-88c4-44d7-9591-d9ecaabe0476\") " Jan 26 16:45:04 crc kubenswrapper[4713]: W0126 16:45:04.682533 4713 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/70f425eb-88c4-44d7-9591-d9ecaabe0476/volumes/kubernetes.io~secret/secret-volume Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.682564 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70f425eb-88c4-44d7-9591-d9ecaabe0476" (UID: "70f425eb-88c4-44d7-9591-d9ecaabe0476"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.691537 4713 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn" (OuterVolumeSpecName: "kube-api-access-7q4hn") pod "70f425eb-88c4-44d7-9591-d9ecaabe0476" (UID: "70f425eb-88c4-44d7-9591-d9ecaabe0476"). InnerVolumeSpecName "kube-api-access-7q4hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.784159 4713 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q4hn\" (UniqueName: \"kubernetes.io/projected/70f425eb-88c4-44d7-9591-d9ecaabe0476-kube-api-access-7q4hn\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.784196 4713 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70f425eb-88c4-44d7-9591-d9ecaabe0476-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.898327 4713 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp"] Jan 26 16:45:04 crc kubenswrapper[4713]: I0126 16:45:04.908297 4713 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-lpnkp"] Jan 26 16:45:05 crc kubenswrapper[4713]: I0126 16:45:05.820417 4713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814d989f-aaa7-4c73-8192-f7bc58d0be57" path="/var/lib/kubelet/pods/814d989f-aaa7-4c73-8192-f7bc58d0be57/volumes"